Why we don’t do mobile usability tests (and neither should you)


At DesignCaffeine, I don’t do mobile usability tests in the middle of my design process. In my experience, mobile usability tests as they are popularly conducted are a waste of time and resources, and in the vast majority of cases fail to lead to the creation of a better mobile product.

Instead, I conduct RITE (Rapid Iterative Testing and Evaluation) studies: the only methodology I have actually seen, in real life, yield more delightful, usable, and successful mobile products in less time.

What is the difference between a usability test and a RITE study, you might ask?

Usability tests, as they are popularly run, involve 8-10 participants testing a fairly elaborate prototype against a set of pre-defined tasks in a laboratory setting. There are minimal prototype changes during the study, and at the end a usability report is produced outlining issues and recommendations.

In contrast, a RITE study I typically run uses 9-12 participants in 3-4 rounds, with 3 people per round. The critical difference is that between rounds I allow time to update the prototype and fix the issues discovered during the previous day’s testing. To enable that, I employ the simplest possible mobile prototype for the job, usually paper or linked HTML.
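To make the cadence concrete, here is a minimal illustrative sketch of how such a study’s schedule works out. The day-per-round rhythm and session details are my own assumptions for illustration, not a formal prescription:

```python
# Illustrative sketch of a RITE study cadence: 3-4 rounds of 3 participants
# each, with prototype-fix time scheduled between rounds.
# The one-day-per-activity rhythm is an assumption for illustration only.

def rite_schedule(rounds=4, participants_per_round=3):
    """Return a simple day-by-day plan: test a round, then fix the prototype."""
    plan = []
    for r in range(1, rounds + 1):
        plan.append(f"Day {2*r - 1}: Round {r} - test with {participants_per_round} participants")
        if r < rounds:
            plan.append(f"Day {2*r}: Update prototype with fixes from Round {r}")
    return plan

for line in rite_schedule():
    print(line)
```

Note that with 3 rounds this yields 9 participants, and with 4 rounds 12, which is where the 9-12 participant range comes from.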

Practitioners who love usability tests might object that, in essence, a RITE study is just 3-4 smaller usability tests strung together, and that the differences between the two are mere semantics.

I disagree.

Usability tests come to us from the processes of developing digital products and software for the mature platforms of the web and the desktop. On the desktop, people tend to perform predefined, complex tasks while sitting down, using a keyboard and mouse.

Mobile is fundamentally different. It involves moving around, interacting with the environment, and taking in a whole range of environmental inputs like GPS, voice, and the camera feed, all while multi-tasking and operating the device one-handed, with a fat meat-pointer.

Doesn’t it make sense that mobile teams should update the existing user-centered methodologies to meet the demands of this new platform?

Let me explain what I mean in the following three points, comparing mobile RITE studies with usability tests (as they are typically conducted).

#1: Mobile usability tests encourage project managers to treat user testing as something to check off, not something that leads the design process.

In my experience, most companies view mobile usability tests as an optional, expensive undertaking. There is a good reason for this: mobile usability tests run by a third-party contractor cost anywhere from $10-20K per round (plus facility fees, participant fees, prototype creation costs, etc.). For that reason,

Organizations usually end up waiting until the design is fairly well baked to conduct usability tests.

This in turn encourages project managers (and other team members) to misunderstand the whole purpose of the exercise and to treat usability tests as some sort of elaborate QA process.

Testing late in the game is not effective in helping the team bring about a better product. The entire point of usability testing is to improve the design; instead, the test is conducted too late in the process to affect the very thing it is supposed to fix.

While mobile design itself is usually fairly simple, most issues that come up are more fundamental, deeper, and wider in scope than in a typical web project. For example: information architecture that is confusing. Flows that are too long and badly sequenced. Too many form fields, or confusing form interactions. Too much text. Confusing screen names. Flows that try to accomplish too many tasks. Flows that try to mimic the web. Ignoring one-handed, meat-pointer software ergonomics. Jack-in-the-boxiness. And these are just some of the mobile-specific issues that can befuddle potential customers.

Unfortunately, by the time the usability test is conducted, many of these deeper fundamental issues are already “baked into” the design and can’t be changed.

This is the fundamental drawback of conducting mobile usability tests, and one of the main reasons I don’t do them in the middle of the design process.

Instead, I conduct lightweight, Agile RITE studies.

In contrast to a typical usability test, a RITE study is conducted as early as possible as part of the design process, and it is not a test. Even the name is purposefully different: test vs. study. “Study” implies that something will be learned as part of the process, so that the mobile product is given a chance to evolve to a better state, even if that involves changing more fundamental aspects such as IA and flow sequences.

#2: Mobile usability test prototypes are often too rigid for the demands of mobile.

A typical mobile usability findings report is presented up the food chain and quoted many times over. This encourages elaborate video-taping contraptions and the creation of a costly hi-fi prototype, because the design is fairly baked at that point and the test needs to “look good” in case an executive might want to stop by. Furthermore, many usability test moderators tend to demand this. There is a general, erroneous industry perception that the prototype needs to closely match the transitions and visual design of the final product. Fixing an elaborate Flash prototype, or worse yet a hand-coded dynamic one, becomes costly and complicated, and the main purpose of the user-centered process is irretrievably lost.

In contrast to a typical usability test,

My mobile RITE prototypes are suitably rough, reflecting the overall degree of completion of the product.

I rarely build Flash or hand-coded prototypes, because the cost/benefit ratio is just too high. Instead, I find that paper, specifically post-it notes or flashcards (more on our testing methodology in my free webinar: Agile Mobile Design – Why we don’t do mobile usability tests (and neither should you)), allows me and the client team to model most interactions effectively. Simple paper prototypes can even be used to model more complex mobile and tablet design elements like transitions (see Storyboarding iPad Transitions). Most importantly, simpler paper or linked-HTML prototypes allow designers to quickly and inexpensively explore multiple design approaches, while dispensing with elaborate camera equipment and other gadgets.

At the same time, study participants feel comfortable brainstorming valuable ideas, which can actually be incorporated into the prototype because the design is not yet finalized. A rough prototype also allows many changes on the fly, sometimes immediately after the first participant is done and before the next evaluator has a chance to see the prototype.

#3: Mobile usability tests are focused on reports, not solutions.

Usability tests (as they are typically run) produce reports. These reports contain vivid descriptions of usability issues, plus best practices designed to help designers work around the issues.

The problem is that mobile is just too young to have much in the way of solid best practices. Instead, the best that can be said after a given mobile usability test is that “the Facebook mobile app does it this way” or “the Amazon mobile website does it that way.” Thus,

Mobile usability test recommendations are typically perceived internally within the organization as adversarial, because everyone has their favorite app and no one can agree which pet design pattern should be followed.

These types of recommendations are also not helpful in creating design solutions, because there are likely to be many mobile designs that would work better for your specific customer and the situation at hand than the recycled standard fare. In a rapidly evolving mobile industry, designers need space to explore creative solutions.

And the best way to do that is to

Focus on getting rapid feedback on what works.

A RITE study provides such a creative space and rapid feedback because it is focused on a solution. For this reason, I rarely videotape my studies or provide elaborate reports. At most, I show what the design-change progression during the study looked like, and the insights we, as a team, gained along the way to arrive at the present, improved design.

Instead of the usability report, the product of the RITE study is the improved design solution. Thus,

As a valuable fringe benefit, RITE helps build effective mobile design teams.

Because the RITE methodology is focused on solutions, it tends to be less adversarial.
Have a cool idea? Let’s try it. Right Now.

The RITE approach is inherently Agile, making it a perfect fit for Agile/SCRUM projects.

When I work with a client’s team, I request that at least 80% of the entire project team be present 80% of the time during the RITE study, so there is little need for a report. The entire team is there, driving and experiencing the design process together, in real time. Most importantly, during a typical RITE study, everyone on the team is focused on coming up with creative solutions to mobile touch problems.

Not on producing reports.

Usability studies and elaborate reports have their place. But the middle of a mobile design process is not one of them.

When it comes to mobile usability testing, don’t keep doing what you’ve always done for the Internet and Desktop software products. Instead, do the RITE thing.

Need a little help?

Need to implement the Agile or RITE testing methodology in your organization? Or simply want to try RITE out on your next mobile project? I might be able to help. Set up your free 30-minute consultation here.

Greg Nudelman

P.S. Like what you are reading? Go VIP.

4 UX Books
Join 6,000+ subscribers getting exclusive content, Q&As, book giveaways, and more. No spam. Just design that works.