3. Observe Your Users, Don’t Just Survey Them

Surveys Are Not Enough and Do Not Tell You the Whole Story


By Kayla J. Heffernan

Six years ago, I introduced the concept of Quantum UX…

When you are a doctor, people will show you things and ask your opinion (is this mole cancer or not?). The same thing happens when you work in UX – people ask whether their interface is good or not.

“Good” is subjective, but generally what they are really asking is: is it usable? This is when I introduce them to the concept of Quantum UX.

Your interface is both excellent and horrible, usable and unusable, at the same time. You cannot know which until it has been tested with users and observed.

Only once you test with the real users of your website can you know whether or not it is usable to them – which is all that really matters. If I, an IT-educated woman, can use it, that is great; but if your audience is elderly men who did not grow up with technology, it’s a whole different story.

What is usable for me may not be usable for them. The only way to truly answer this question is to open the box and observe the state of the cat.

Today, I still think that Quantum UX is a valid concept and, in fact, a great way to introduce this new article: Observe Your Users, Don’t Just Survey Them.


Sometimes it feels like UX research has become synonymous with surveys. They are relatively cheap and easy, and they can reach large numbers of users. Sounds great, right?

Wrong. Surveys are not as easy to design as people assume. If you ask biased or leading questions, you cannot trust your results. My friend and former colleague, Mimi Turner, wrote an excellent series about this*.

Beyond possibly biased results, you still may not get the full picture from a survey alone. If you ask people how easy or difficult they find a task (or using your product), their answer may not match the conclusion you would draw if you observed them completing the task. This point can be illustrated with the exaggerated example of labour and delivery.

Labour and delivery


If you watched someone (without an epidural) go through labour and delivery, you would probably observe that they are in a great deal of pain. If you saw the same person shortly after the birth, they may seem pain-free and possibly overjoyed.

This is because of the oxytocin released by the body and the fact that they (hopefully) have a healthy and loved baby (the halo effect). If you ask the parent how bad childbirth was after the fact, they may downplay the amount of pain they experienced in the moment.

Let’s take this example to the extreme and say that we can only administer a survey, with no observation of anyone labouring and delivering: we would conclude that childbirth does not hurt that much.

The same is true of user research. I have observed participants fail to complete tasks yet still rate them as ‘very easy’, 7 out of 7, on the Single Ease Question (SEQ) – a 7-point scale that assesses how difficult or easy users find a task and which, in usability testing, should be administered immediately after a user attempts the task.

I have also seen them struggle with every step of the process, only to state that a task was very easy.
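To make that gap concrete, here is a minimal, hypothetical sketch (in Python) of how you might line up observed task outcomes against self-reported SEQ ratings after a round of usability testing. The participant data and the “rated 6 or 7 but did not complete” rule are illustrative assumptions, not figures from any real study.

```python
from dataclasses import dataclass

# Hypothetical usability-test results: what the moderator observed,
# paired with each participant's self-reported SEQ rating.

@dataclass
class TaskResult:
    participant: str
    completed: bool   # observed: did they actually finish the task?
    seq_rating: int   # self-reported: 1 (very difficult) to 7 (very easy)

results = [
    TaskResult("P1", completed=True,  seq_rating=6),
    TaskResult("P2", completed=False, seq_rating=7),  # failed, yet rated it "very easy"
    TaskResult("P3", completed=True,  seq_rating=3),
]

# Flag participants whose self-report contradicts what was observed.
mismatches = [r for r in results if not r.completed and r.seq_rating >= 6]

for r in mismatches:
    print(f"{r.participant} did not complete the task but rated it {r.seq_rating}/7")
```

Even one row like P2’s is a cue to go back to the session recording rather than take the score at face value.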

Does this mean that people are lying?


No. This discrepancy happens due to a combination of factors (including impression management, whereby participants want you to think they found the task easy and knew what they were doing because they are smart and capable).

Two factors in particular are:

  • The remembering self sees things differently than what the experiencing self encountered at the time.

  • The Dunning-Kruger effect in practice.

1.  The experiencing self versus the remembering self


What one experiences at the time is referred to as the “experiencing self” (Zajchowski, Schwab, and Dustin, 2016). Reflections on experiences after the fact, done by the “remembering self”, lose the nuance of in-the-moment understanding (i.e. the experiencing self during labour being in a great deal of pain vs the remembering self downplaying it).

The recency effect can cause what is remembered to be interpreted differently from what the experiencing self encountered at the time. That is, retrospective feedback from people who are using your product without issue, or who have completed a task, may minimize the difficulties they encountered. If they have persevered through difficulties, they may not even recall experiencing them at the time.

While Kahneman (2011) concludes that the remembering self is the dominant means by which lives cultivate meaning, the experiencing self has implications for which technologies continue to be used (or are abandoned) and for usability studies.

2.  The Dunning-Kruger effect


The Dunning-Kruger effect is often invoked to describe people being ignorant of their own ignorance. More specifically, it is a cognitive bias in which people assess their abilities as greater than they are, because people are not good at evaluating their own competency (Dunning, 2011).

Participants assess how easy a task was using the same abilities and skills they used to complete it. If they think they finished the task successfully, it makes sense that they then rate it as “easy”.

You need to observe your users, not just survey them


These two factors highlight why we need to observe users interacting with our products, not just ask them to complete surveys.

The experiences people recall may not be what they actually experienced at the time. Further, while they may tell you something was easy, if you have watched them struggle through the process you can identify usability issues and opportunities for improvement – reducing frustrations that go unexpressed, as well as frustrations that a wider audience may face.

Doing so will improve your product by reducing pain points and increasing usability.

*  KEY POINTS from Mimi Turner’s series

  • There are five key things you need to consider before you begin writing a survey:
      1. What are you trying to find out?
      2. Who are you targeting?
      3. How do you plan to deliver the survey?
      4. How are you going to analyze and interpret the data?
      5. What quota(s) do you need to meet?
  • Always preview your survey and get people to test it before it goes out to the intended audience. There may be aspects you have overlooked.

  • It is important to cleanse the data before analyzing it. Look out for cases where respondents have skipped questions, entered nonsense responses, misunderstood the question, completed the survey much faster than the average, or clicked responses in ‘patterns’ (revealing a lack of genuine engagement). These responses need to be eliminated from your analysis – a minimal code sketch of some of these checks follows below.
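To make the cleansing step concrete, here is a minimal sketch of how some of those checks could be automated. It assumes responses live in a pandas DataFrame; the column names, thresholds and sample data are hypothetical illustrations rather than a standard recipe, and nonsense or misunderstood answers still need a manual read-through.

```python
import pandas as pd

def cleanse_responses(df: pd.DataFrame, question_cols: list[str]) -> pd.DataFrame:
    # Drop respondents who skipped any question.
    cleaned = df.dropna(subset=question_cols)

    # Drop respondents who finished suspiciously fast
    # (here: under half the median completion time).
    median_duration = cleaned["duration_seconds"].median()
    cleaned = cleaned[cleaned["duration_seconds"] >= 0.5 * median_duration]

    # Drop "straight-liners" who gave the same answer to every question,
    # a common sign of disengaged responding.
    straight_lined = cleaned[question_cols].nunique(axis=1) == 1
    return cleaned[~straight_lined]

# Example usage with made-up responses:
df = pd.DataFrame({
    "duration_seconds": [300, 40, 280, 310],
    "q1": [5, 4, 3, None],
    "q2": [4, 4, 3, 2],
    "q3": [6, 4, 3, 5],
})
print(cleanse_responses(df, ["q1", "q2", "q3"]))
```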

I am Kayla J. Heffernan and these are my agile-thoughts

2021 © Melbourne, AUSTRALIA by Kayla J. Heffernan


Kayla is a UX Designer and Researcher with over a decade of experience. Her passion is solving ambiguous problems with accessible and inclusive solutions, particularly in the health space.

I am passionate about research, not just pushing pixels: UX goes beyond the screen.

Kayla has recently submitted her PhD dissertation, which she completed part time, and is enjoying catching up on 6 years of lost weekends…or at least she planned to but instead ends up watching Netflix and cuddling with her 2 cats on the couch.

Interesting, isn’t it?
