Methods of UX Research – Overview




Further to my previous post “What is UX? What are UX Research and Design?“, here are some notes about the second part of the MOOC “Introduction to User Experience” offered on the edX platform. This post introduces some methods of UX Research, such as user testing and the micro-usability test. There is also a quiz with 10 questions to assess your knowledge of this topic.

You can jump right to the two other posts related to this MOOC:


1) Basic Methods of UX Research

3 basic methods of UX Research:

  • Ask (people what they do and what they think)
  • Observe (what people do)
  • Inspect (prototypes and artifacts)



Ask methods:

  • interviews = conversations with stakeholders to understand aspects of their experience
  • surveys = questions distributed to lots of people to elicit attitudes, behaviours and characteristics
  • focus groups
  • diary studies
  • experience sampling



Observe methods:

  • ethnographic observations = watching people engage in activities to understand how they go about them
  • user testing = watching people perform scripted tasks to see if a system supports them
  • usage analytics = analyzing large scale traces of system usage to understand patterns of use
  • video analysis
  • social media mining
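
The “usage analytics” idea can be illustrated with a tiny log-analysis sketch in Python. The log format, user IDs, and pages here are invented for the example; real analytics pipelines work on much larger traces:

```python
from collections import Counter

# Hypothetical usage trace: one "user_id page" event per line.
LOG = """\
u1 /home
u1 /search
u1 /product/42
u2 /home
u2 /search
u2 /search
u3 /home
u3 /product/42
"""

def page_counts(log_text):
    """Count how often each page appears across all recorded events."""
    counts = Counter()
    for line in log_text.strip().splitlines():
        _user_id, page = line.split()
        counts[page] += 1
    return counts

counts = page_counts(LOG)
print(counts.most_common(3))  # the most visited pages first
```

Even a simple frequency count like this can reveal patterns of use (e.g. which features are actually exercised) that no individual observation session would surface.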



Inspect methods:

  • guideline-based = comparing a system design against known best practices to find probable flaws
  • walkthroughs = stepping through an interaction sequence from a “user's-eye” view to find probable breakdowns
  • comparative analysis = systematically comparing a design with similar designs to identify strengths and weaknesses


Combo: Watch and Ask

  • user testing = task observation, typically accompanied by interviews
  • contextual interviews = ask questions while observing “natural” activities take place
  • artifact-based methods

When to use what?

Ask when…

=> observation is infeasible (activities are infrequent, long, or private)

=> values and motivations are key

=> (surveys) large numbers and high certainty are needed


Observe when…

=> self-report will miss information (memory, tacit knowledge)

=> process and communication are important

=> (analytics) large numbers and high certainty are needed


Inspect when…

=> you have a product to inspect

=> interacting with users is too expensive or cumbersome


The process of assess, design and build


2) User Testing, part 1

User testing is the core method in UX Research.

What is User Testing?

This is watching representative users try to accomplish important tasks using a product.

Why User Testing?

a) You learn a lot from watching people use a system:

  • what works and what doesn’t
  • why things work and don’t
  • user needs you missed

b) Why not just use your own experience?

  • you know too much (you can no longer see the system as a newcomer would)
  • you know too little (about real users' needs and contexts)

Basic idea

  • find potential users
  • ask them to do some stuff (tasks)
  • observe
  • ask some questions (debrief)
  • write down what you learned

Potential users

  • people who fall within the target audience:
    • attitudes
    • behaviors
    • characteristics
  • not current users
    • it is OK if they currently use the system, as long as they have not already done the selected tasks




Choosing tasks

  • things that most users need to do – “Buy a book”
  • more difficult things that some users need to do – “Buy a hardbound book by a Nigerian author that was published this year” (more specific)

Close-ended vs. open-ended tasks

  • Close-ended tasks:
    • have a clear end point
    • have a verifiable outcome
    • follow an expected path

=> Example: “Buy a hardbound book by a Nigerian author that was published this year”

  • Open-ended tasks:
    • allow user to judge when complete
    • may not be verifiable
    • allow following alternate paths

=> Example: “Find some books you might like to read on vacation”

Which are better?

  • closed-ended:
    • less natural
    • control for motivation
    • control for interpretation
    • assess success
  • open-ended:
    • more natural
    • varying motivation
    • varying interpretation
    • how to assess success?

=> Use both

Task sets

  • progress from easier to harder (to avoid frustration and encourage users)
  • cover a range of critical task types (browse, search, buy)
  • can include open-ended and closed-ended tasks
  • be careful to avoid “ordering effects” (earlier tasks influencing how participants perform later ones)
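
One standard way to reduce ordering effects is to counterbalance task order across participants. A minimal sketch, using a simple rotation so that each task appears in each position equally often (the task names are illustrative):

```python
def rotated_orders(tasks):
    """Return one rotated ordering per starting position, so each
    task appears in each position exactly once across the set."""
    return [tasks[i:] + tasks[:i] for i in range(len(tasks))]

# Illustrative task set covering a range of task types.
tasks = ["browse", "search", "buy"]
for participant, order in enumerate(rotated_orders(tasks), start=1):
    print(f"Participant {participant}: {' -> '.join(order)}")
```

With more participants than orders, you simply cycle through the orders again; full counterbalancing of all possible orders is only practical for very small task sets.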


3) User Testing, part 2

How to explain to your participants what it is that you want them to do?

A) Task wording

Don’t lead the witness (do not provide clues about how to accomplish the task)

Examples on Amazon:

  • not good – “put 3 books in your shopping cart, then purchase them using Standard Shipping”
  • better – “Choose 3 books and buy them, making sure they can get here by next Wednesday”

Avoid ambiguity in the task instructions.

Include context and motivation where needed.


Include information about how and when to stop.


Pilot test!

B) Think aloud protocol

Participants say (out loud) what they are thinking.

Thinking includes:

  • looking for something
  • reading text
  • hypothesizing about how the system might work
  • interpreting system options
  • interpreting system feedback
  • explaining decisions
  • feeling frustrated, happy


Thinking aloud has advantages:

  • Hear how the user thinks about the task
  • Learn what the user actually sees and notices
  • Hear how the user interprets options, feedback

Thinking aloud also has disadvantages:

  • Timing will not be realistic
  • Attention to detail will not be quite realistic
  • Need to determine “rules of engagement” for questions, mistakes, etc.


4) User Testing, part 3

A) Debrief (after tasks)

  • Review problems, get more information
  • Ask about usefulness, value
  • Ask about perceived usability, aesthetics, credibility
  • Compare to known alternatives


B) Making sense of the test

=> What did you learn from this test?

Capture “critical incidents”

  • errors
  • expressions of frustration
  • breakdowns
  • pleasant surprises

Assess success / failure

  • usually a spectrum

Capture overall reaction, and reaction to specific aspects

Link incidents to success / failure and to subjective reaction


C) Learning from the test

Quick! Write it down right after the test!

Critical incidents, and when they happened

  • mental model mismatches
  • misinterpretations
  • invalid assumptions made by the system
  • missing user needs
  • too little flexibility
  • too little guidance

Problems: assess each problem's severity and its impact on:

  • success / failure
  • subjective experience
  • product goals
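
A lightweight way to “write it down right after the test” is a small structured record per critical incident. The fields and the 1-4 severity scale below are illustrative assumptions, not part of the course material:

```python
from dataclasses import dataclass

@dataclass
class CriticalIncident:
    minute: int    # minutes into the session
    task: str      # which task the participant was doing
    kind: str      # e.g. "error", "frustration", "breakdown", "pleasant surprise"
    severity: int  # illustrative scale: 1 (cosmetic) to 4 (blocks the task)
    note: str

incidents = [
    CriticalIncident(3, "Buy a book", "error", 3,
                     "Clicked 'Wish list' expecting it to be the cart"),
    CriticalIncident(7, "Buy a book", "frustration", 2,
                     "Could not find the shipping options"),
]

# Review the most severe incidents first when writing up the test.
for inc in sorted(incidents, key=lambda i: i.severity, reverse=True):
    print(f"[severity {inc.severity}] min {inc.minute} ({inc.kind}): {inc.note}")
```

Recording the minute and the task makes it easy to link each incident back to success/failure and to the participant's subjective reaction later.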

Other UX factors

  • usefulness
  • desirability
  • credibility


D) Other very important things

  • Participation is voluntary, and you must let participants know this
  • Participants can stop at any time
  • You are testing the system, not the participant


E) What is “Micro-usability” test?

Relaxed recruiting

  • people close enough to target audience to be able to imagine
  • A.k.a. the “hallway” usability test: especially in a workplace, you simply pop out into the hall, grab whoever is there, and have them try your product and give quick-and-dirty feedback on what is and isn't working

Fewer tasks

  • the test should last less than 30 minutes (including 2-3 tasks) rather than 60-90 minutes (5-10 tasks for a “normal” test)

Little or no data collection

  • no recording
  • no questionnaires
  • no logging

Off-the-cuff analysis


5) Quiz

You can now test your knowledge with this 10-question quiz related to methods of UX research.

Then, you can jump right to the two other posts related to this MOOC:
