Understanding User Experience Bias and Why You Believe in Horoscopes
Part 1: Researcher Bias
Ever wonder why your fortune cookie knows what’s in store for you, or how your horoscope so uniquely describes your personality? Without ruling out the merits of witchcraft and wizardry, I’d like to introduce you to a concept called bias. Bias, a word that makes every researcher cringe, is actually the brain’s solution to navigating the endless influx of stimuli in our environment. When faced with decision-making overload, our brain uses biases as shortcuts to guide us through our lives more seamlessly. Bias can be very helpful in this way: warning us not to walk down a dark alley, or providing an unfounded sense of optimism that motivates us throughout the day.
Bias can also create misinformation, as is the case with horoscopes, and can have serious business implications when it comes to user experience research. Business ideas and designs have prospered and perished at the hands of research. Well-founded studies have supported and molded the world’s best designs, while flawed research has led designers astray, sinking endless cost and time into pursuing the wrong business direction. To ensure meaningful results, research and analysis must be handled in as unbiased a manner as possible. This bias can stem from either the researcher conducting the study or from the participant.
In part one of this series, we’re outlining three major biases and offering ways user experience researchers can avoid them.
Confirmation Bias
Confirmation bias is our natural inclination to seek out or pay greater attention to information that supports our beliefs, and to discount information that does not. It is this type of wishful thinking that brings us back to our original example. Confirmation bias allows horoscopes and fortune cookies to weave their magic, defining our inner workings and “predicting” our future events. For instance, your horoscope may read, “Today you will encounter a situation in which you will receive good advice from a friend.” Though it may be completely normal for a friend to share a recommendation, confirmation bias prompts you to identify the event as consistent with your fortune, giving credence to that little slip of paper in your cookie, or confirming your belief in horoscopes.
This is particularly dangerous territory for user experience researchers, as there is ample opportunity to develop personal opinions about designs or based on prior interviews. If a researcher favors the design of the first prototype over the next, confirmation bias may cause them to give more weight to positive opinions of the first design, or to subconsciously disregard negative feedback. Confirmation bias is especially tricky in that it often leads to the observer-expectancy effect, where the researcher’s expected outcome causes them to mold their questions and behavior in a manner that steers participants toward the anticipated result.
To avoid confirmation bias in testing, here are a few best practices:
- Identify any opinions or assumptions you have going into testing. Ask yourself whether your analysis merely confirms those ideas, and whether you would read the data the same way if you felt differently.
- Ask yourself which pieces of information you readily accepted and which you skimmed.
- Make sure your sample size is large enough to provide a greater pool of evidence.
- Write a discussion guide beforehand and stick to it closely to avoid “leading the witness.”
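On the sample-size point above, a standard margin-of-error calculation for a proportion can give a rough sense of how many participants are needed before a pattern in the feedback is trustworthy. This is a minimal sketch, not something from the article itself; the 95% confidence level and the target margins are illustrative assumptions:

```python
import math

def required_sample_size(margin_of_error, z=1.96, p=0.5):
    """Minimum sample size for estimating a proportion.

    z=1.96 corresponds to a 95% confidence level; p=0.5 is the
    most conservative assumption about the true proportion.
    """
    return math.ceil((z ** 2) * p * (1 - p) / margin_of_error ** 2)

# "3 of 5 users preferred design A" carries a huge margin of error;
# even a +/-10% margin calls for roughly a hundred participants.
print(required_sample_size(0.10))  # -> 97
print(required_sample_size(0.05))  # -> 385
```

Qualitative studies rarely reach these numbers, which is exactly why a handful of interviews should be treated as directional evidence rather than confirmation of a favored design.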
The Recency Effect
UX research generates vast amounts of subjective data, sometimes multiple hours of interview content per participant. The brain’s capacity to absorb this information is limited, and it can easily default to the “shortcut” bias called the recency effect. This effect causes researchers and non-researchers alike to give more credence to what they heard most recently rather than what they heard most often. Essentially, the most recent information sticks out more in one’s mind, giving it greater weight than the earlier data.
The recency effect can greatly skew the analysis of data based on the last interview performed or the most recent piece of information synthesized. Let’s say that a researcher is performing ethnographies with customers to test the effectiveness of a pizza delivery app. The last customer raves about the pizza delivery status bar, but hates the “customize your pizza” function. If the recency effect comes to bear, and the researcher leans too hard on this interview, the client may scrap the “customize your pizza” function and lose out on an engaging, differentiating app utility.
Here are some ways to avoid getting caught in the recency effect:
- Analyze data in a different order than that of your interview sequence.
- Devise a process for data analysis before testing that gives equal weight to all responses.
- Create a debrief document. Doing a standardized debrief with colleagues after an interview gets the information down while it’s fresh, and provides a succinct reference to shuffle through when you’re on information overload.
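The first two points above can be sketched in a few lines: shuffle the interviews before synthesis, then tally every response equally so that no session’s position in the schedule gives it extra weight. This is a minimal illustration; the data structure, participant labels, and feedback themes are made-up assumptions, not real study data:

```python
import random
from collections import Counter

# Illustrative interview notes; in practice these would come from
# standardized debrief documents written after each session.
interviews = [
    {"participant": "P1", "feedback": ["likes status bar", "confused by menu"]},
    {"participant": "P2", "feedback": ["likes status bar"]},
    {"participant": "P3", "feedback": ["confused by menu", "wants guest checkout"]},
]

# 1. Analyze in a different order than the interview sequence.
random.shuffle(interviews)

# 2. Give equal weight to all responses: count how often each theme
#    appears across participants, instead of recalling the last session.
theme_counts = Counter(
    theme for session in interviews for theme in session["feedback"]
)
for theme, count in theme_counts.most_common():
    print(f"{theme}: mentioned by {count} participant(s)")
```

The counts are the same no matter which interview happened last, which is the point: the synthesis leans on frequency across the whole sample, not on whichever session is freshest in memory.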
Task-Selection Bias
This more recently defined bias is less generalized to our everyday lives and rooted more specifically in user experience research. Defined in the International Journal of Human-Computer Interaction, task-selection bias is the effect that takes place when a user knows that a task can be completed due to the mere prospect of being asked to do it. Researchers in the UX field would not ask a user to attempt a task unless it were possible to complete, thus cluing the user in that the feature is available and findable on the page. The problem here is that when users interact with a website on their own, they do not assume functionality that is not readily apparent. If a clothing retailer offers a virtual reality dressing room but doesn’t advertise it in plain sight, the average user would never think to look for the capability and may quickly lose interest. In the same vein, if a researcher instructs a user to engage a drag-and-drop function or identify the notification center, they are biasing the user by indicating that these functionalities exist.
Task-based questions are an inherent component of user research, but here are a few ways to elicit the kind of feedback that can otherwise be compromised by task-selection bias:
- Ask open-ended questions. When a user lands on a new page, consider asking, “What stands out to you here?” or “What would you expect to find in this section?”
- Get general feedback. Asking what features a user would hope to have and where they would expect to find them gives helpful feedback without indicating what’s already available.
- Engage in out-of-the-box activities. You can include creativity-prompting exercises such as having respondents draw out or describe their ideal webpage design and functionality.
Ultimately, biases are nearly impossible to avoid, and they can actually benefit our daily lives and routines in many ways. However, by keeping just a few simple practices in mind, you can make your UX studies as unbiased as possible, increase the value of your research, and provide the best-quality data to support or reject the most important business decisions.
Stay tuned for part two of this series, where we’ll highlight participant biases and how to reduce them. If you have research needs for your company, reach out to firstname.lastname@example.org.
About the author
Leah is a Senior Experience Researcher at Ogilvy. With a background in market research and a Master of Science in Applied Psychology from the University of Southern California, Leah employs user-centered design research with a focus on uncovering trends and identifying the intangible thoughts and behaviors of the customer.