The Opportunity: Refine the Search Experience 

For library patrons, being able to quickly and easily find a title to read, watch or listen to is a crucial part of their web browsing experience. As a result, providing relevant and lightning-fast catalog search results is a fundamental and complex product requirement of the BiblioCommons library services platform.

When the product team introduced a new technology stack, it presented an opportunity to assess and refine the BiblioCommons catalog search experience. Before embarking on the next iteration, we recognized the value of evaluating the current state with library patrons. The first release was intended to make the search results easy to filter and highly relevant to the visitor’s needs. This included changes to the back end, such as improving the “smart search” algorithm and creating infrastructure that would allow “grouped search results” (aka Functional Requirements for Bibliographic Records). On the front end, the team introduced a new filtering UI and a “pin tool”.

Evaluating a new feature: The Pin Tool

The pin tool was initially introduced by the Product Manager to enhance the search experience by reducing the effort and redundancy involved in performing multiple, similar searches. Since a large number of filters can be applied during a catalog search (e.g. a visitor might want to see new DVDs and books about Muppets that arrived in their branch this week), the pin tool allowed visitors to “pin” their active filters so they could be applied to a new search (e.g. if they also wanted to search for new DVDs and books about gardening).
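To make the intended behaviour concrete, here is a minimal sketch of the pin tool’s logic, written in TypeScript with hypothetical names (this is not BiblioCommons’ actual implementation): when the pin is active, the current filters carry over to the next search; when it is off, a new search starts with a clean slate.

```typescript
// Hypothetical model of the pin tool's behaviour (illustrative only).
interface CatalogFilter {
  facet: string;   // e.g. "format", "audience", "availability"
  value: string;   // e.g. "DVD", "Kids", "New this week"
}

interface SearchState {
  query: string;
  filters: CatalogFilter[];
  pinned: boolean; // true when the visitor has toggled the pin tool on
}

// Start a new search. If the pin is active, reuse the previous filters;
// otherwise begin with no filters applied.
function newSearch(previous: SearchState, query: string): SearchState {
  return {
    query,
    filters: previous.pinned ? previous.filters : [],
    pinned: previous.pinned,
  };
}

// Example: a visitor pins their "new DVDs" filters while searching for
// Muppets, then searches for gardening without re-applying the filters.
const muppetSearch: SearchState = {
  query: "muppets",
  filters: [
    { facet: "format", value: "DVD" },
    { facet: "availability", value: "New this week" },
  ],
  pinned: true,
};
const gardeningSearch = newSearch(muppetSearch, "gardening");
// gardeningSearch.filters still contains the pinned filters.
```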

The pin was not a very common design pattern – and the feature had not been tested before launch – so the Product Manager was curious to know if visitors found it helpful and usable.

The Process: Discover, Iterate, Deploy – or Delete?  

My role, as the new User Experience Designer on the BiblioCommons Product team, was to create UX research and design artifacts that were user-centred and data-driven. My research plan was informed by interviewing relevant stakeholders, reviewing existing data and evaluating the current state of the interface against usability heuristics. Since I was new to the project, this discovery process also helped me understand more about the history of the product, user needs and business goals.

Given the time constraints and resources available, I recommended we conduct an unmoderated usability study (on desktop and mobile devices, using the UserTesting.com platform). This evaluative UX research method would allow us to quickly assess the pin tool while revealing other pain points or user needs. The findings would help the Product Manager decide whether the feature should be iterated upon or abandoned altogether.

Providing tools to support the UX Research Process 

Given that conducting UX research was fairly new territory for the BiblioCommons product team, I saw an opportunity to cultivate a user-centred mindset and operationalize the research process. To achieve this, I created a “One-Page Testing Plan” template, a lean artifact that is useful throughout a research project’s lifecycle, from planning activities to documenting findings.

The One-Page Testing Plan template proved to be valuable for:

  • Providing a straightforward framework for gathering requirements and feedback from stakeholders. The one-page format captures scope, prioritizes objectives and identifies constraints, tasks, hypotheses, etc. in a concise and visual way. This allowed me to feel confident I could craft a script that would meet the team’s ambitious research objectives.
  • Making the research process more collaborative and visible. Inviting input from core stakeholders allows the team to sync up about research activities and objectives. This helps surface any misalignment, scope creep or knowledge gaps that may arise.
  • Nurturing a human-centred design process by exposing the team to a consistent, lean and methodical approach to research activities.

Above: The One-Page Testing Plan template succinctly outlines the details of the study, including the research objectives, methodology, devices, screener details, hypotheses, tasks and study timeline.

The challenges of writing an effective script

Our primary research objectives were to observe searching behaviour and assess whether the pin tool was visible and its purpose clear (i.e. evaluate its “affordance”). This was tricky to achieve, since the pin tool only became visible once filters were applied after a search, but leading participants to it would be counterproductive because that would prevent us from assessing its visibility! Directing users to the pin tool would also mean a missed opportunity to evaluate how a visitor interacts with filters (especially when conducting multiple searches).

One drawback of unmoderated tests is that they require participants to complete tasks presented in writing, so there isn’t an opportunity to probe in the moment or help redirect a participant who has gone off track. Plus, the number of tasks we could include was constrained by UserTesting’s recommended 15-minute limit for unmoderated tests.

As a result, achieving our objectives required crafting a script that took a very thoughtful approach to task sequence and the wording of follow-up questions. The script moved from open-ended to closed-ended tasks; that is, before being asked to complete tasks involving the pin tool, participants completed several open-ended tasks that mimicked a typical search. Once they had been given ample opportunity to discover and use the pin tool on their own, they were asked whether it had behaved as they expected. This approach allowed us to observe searching behaviour and assess whether the pin tool was visible and useful.

 

Above: A screenshot of a video from the unmoderated desktop study shows a follow-up question about the pin tool. The multiple-choice format captures metrics, which makes analysis much quicker than a verbal response. To generate deeper qualitative insights, participants were also given the opportunity to explain their response.

Using Hypotheses to Inform and Engage the Team

Hypotheses (i.e. the expectations of the test’s outcome) are powerful because they often reveal something useful, such as a telling nugget of data, a team’s conflicting priorities or a troubling assumption. This can be helpful for a number of reasons, such as informing scope, resolving misalignment and/or managing bias. Defining a hypothesis can also be a fun way to get the team invested in the outcome (especially if a wager is involved!). Once analysis is complete, hypotheses also provide a succinct and familiar frame for the findings.

For example,

Hypothesis: The pin tool is hard to see

TRUE (especially on desktop)! Several participants failed to see the pin tool and did not interact with it until being directly asked about it later in the test. This was especially problematic on desktop devices, which suggests that the visibility of the pin tool could be hampered by “right rail blindness” and its distance from the filters (which are a more important point of focus during a search task).

Fostering a collaborative, human-centred design culture

Before creating mockups and specs that captured my design recommendations, I shared a report and some short videos of the findings. The video clips really helped to drive the findings home and engage the team.

Once the Product Team had a chance to review the findings, I facilitated a “How Might We…?” sketching session. This involved gathering team members from a variety of disciplines (including front-end developers, the product manager and leadership) to explore solutions together. This collaborative approach exposed the team to the human-centred design process, and it also proved to be an effective way to get feedback on technical feasibility and build stakeholder buy-in. The sketching session was also a great excuse for me to get everyone doodling at work. Er, I mean get the team to practice Visual Thinking principles. Muah ha haaaa..

Above: The results of a “How Might We…?” sketching session. This generated lots of ideas while giving the team an opportunity to participate in a human-centred design process.

Findings & Recommendations

While many participants used the pin tool successfully (and found it useful), the study also revealed some usability issues related to visibility and affordance. Although the pin tool’s visibility issues could be corrected with some straightforward UI tweaks, another, more troubling, pattern emerged.

Some participants who used the pin tool during a search were less sure of its purpose after they had interacted with it (even though they had correctly guessed it was for “pinning” or “saving” the filters)! This suggested that the pin tool’s relationship to the filters was clear, but its purpose of helping visitors apply those filters to a new search was not registering. We needed to make the pin tool’s relationship to searching more explicit.

To help resolve this issue, I recommended that, while the pin tool was active, a pin icon should appear in the global search input field. This design pattern is inspired by the behaviour of a caps lock indicator, which signals that a mode remains engaged while you type. Ideally, this would help visitors realize that the filters were “pinned” to the search. Other recommendations included adding visual feedback by changing the colour of the active filters when the pin is toggled on and off.

Above: When the pin tool is active, an indicator appears in the search input field. This pattern is inspired by the caps lock indicator and helps people know that the pin toggle is currently active and related to search functionality.
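As a rough illustration of this recommendation, here is a small TypeScript sketch (the selectors and class names are hypothetical, not BiblioCommons’ production code): when the pin toggle is switched, the indicator inside the search field is shown or hidden and the active filters are restyled to provide visual feedback.

```typescript
// Hypothetical UI wiring for the recommendation (illustrative only).
// While the pin toggle is on, show a pin icon inside the global search input
// (like a caps lock indicator) and restyle the active filters.
function renderPinState(pinned: boolean): void {
  const indicator = document.querySelector<HTMLElement>(".search-input__pin-indicator");
  if (indicator) {
    // The indicator lives inside the search field, so the pinned state stays
    // visible even after the visitor types a brand-new query.
    indicator.hidden = !pinned;
  }

  document.querySelectorAll<HTMLElement>(".filter--active").forEach((filter) => {
    // Colour change gives immediate feedback when the pin is toggled on or off.
    filter.classList.toggle("filter--pinned", pinned);
  });
}

// Example usage: update the UI whenever the visitor clicks the pin toggle.
document.querySelector(".pin-toggle")?.addEventListener("click", (event) => {
  const toggle = event.currentTarget as HTMLElement;
  const nowPinned = toggle.getAttribute("aria-pressed") !== "true";
  toggle.setAttribute("aria-pressed", String(nowPinned));
  renderPinState(nowPinned);
});
```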


See for yourself!

Check out the pin tool in action on the Edmonton Public Library website. The link opens a search result with the pin tool activated, but feel free to use the toggle and filters to generate your own search.