
By Gabe Orlowitz, Product Designer

A Quick (but not dirty) Approach to Usability Testing



By answering a few common questions and exposing some frequent mistakes around usability testing, I argue that no matter what your role is in building software, you can and should conduct some form of usability testing.


It’s not hard, and the value is immense.


Without a doubt, a UX professional will do a better job than the DIY tester for whom this article is written. Whenever possible, please do make use of a usability expert.


But the reality is that in most organizations, far more production code has shipped without usability testing than with it. We on the UX and UX Research team can't test every single feature, workflow, and widget before it ends up in the hands of users.


The users of our products pay for this. But more importantly, we as a company pay a steep price in the form of churn and unhappy clients.




First, what is usability testing?


According to Steve Krug in his book “Rocket Surgery Made Easy,” usability testing is:


“Watching people try to use what you’re creating/designing/building (or something you’ve already created/designed/built), with the intention of (a) making it easier for people to use or (b) proving that it is easy to use.”


A key concept here is that it involves watching people actually use things, which separates this technique from things like surveys, interviews, and focus groups, all of which involve asking people for their opinions or past experiences.


It is this element of actual use that will produce the results needed to fix the major usability issues in your software.

You don't promise someone a way to the top floor and then hand them a broken elevator.

Why does it work?


All websites and applications have problems, many of them business critical. Have you ever encountered a website that worked flawlessly? While the more mature sites probably have fewer problems, they certainly have their fair share (Amazon and G Suite are no exceptions).


The most serious problems tend to be easy for actual users to find. When you design and build something, you know how it works, which makes the usability issues less obvious to you. Users, however, do not know how the software works, and so they stumble straight into the issues hiding in plain sight. You just have to watch.


Testing also forces us to design for the people who actually use our software. Without exposure to real usage, we tend to design for an abstract concept of the user, which in most cases is based on ourselves and our own opinions. That's a mistake, and it reliably leads to problems.


"Usability testing informs your design intelligence, sort of the way travel is a broadening experience." - Steve Krug


What do you test?


Whatever you have designed or built, that's what you test. That goes for napkin sketches, wireframes, and production code.


If you already have an existing site up and running that you're planning to redesign, start by testing that. You'll learn about the most pressing usability problems after just a few sessions with real users. You'll also learn a whole lot of things you didn't know about how people actually use your site or application, not how you think they do.


You could also leverage other people's sites and have users test those. You'll learn what works and what doesn't from what others have already built.

Furthermore, you can:


  • Do the napkin test: These aren't full tests, but they're great for seeing what people make of a concept. You ask them to figure out what something is, not what they think of it. For example, you could show a rough sketch of a common workflow for that user. If they have no idea what it's conveying, you've learned in a matter of seconds that you're on the wrong track. Imagine skipping that napkin test and spending months building out the feature instead. It sounds crazy, but it happens all the time.

  • Test wireframes: These are schematic diagrams of a page, essentially showing where things go and how information could be presented from an information architecture standpoint. Here you test the findability of things, your naming conventions, and the overall organization of your site.

  • Test prototypes: Here you can dig into actual workflows and interactions, and ask people to vocalize their thoughts as they work through a task. The more interactive your prototype, the more participants can react to, and the richer their insights will be.


When do you test it?


As early as possible. It's very easy to detect serious usability problems early in the process, even if you have little to show. This is also a far less costly option, compared to building the site out with problems already embedded in it.


The worst practice of all is waiting until the site is done and ready to launch. You may think the team will find and fix the most pressing problems right after launch, but ask yourself: how often has that actually happened? Start earlier than you think makes sense.



Who do you recruit for testing?


The main thing to avoid is testing with people in your organization (unless you're testing something used internally). Tempting as it is, your coworkers know too much. Anyone who works on what you're testing, whether they build it, support it, train people on it, or document it, is out of the question.


Instead, try to find people who know little to nothing about it, perhaps those outside of R&D, or new hires. If you're having trouble finding participants in person, remote testing is always an option.



But, isn’t it an unnatural environment?

Can people really accurately convey their impressions about an experience?


Sure, it’s not a real-world situation, but usability tests almost always produce useful, actionable insights about the thing being tested. Try it for yourself! You’ll either (a) learn what can be improved or (b) confirm that what you’re testing is perfectly usable and move on. I can almost guarantee you'll find something to improve.



How many users do you test with?


This is an age-old debate, and the answer depends on the test. For you, the DIY tester, the answer is three.


Three is enough because you should focus on uncovering the problems you can actually fix, not necessarily all of them. The first three users are likely to encounter many of the most significant problems related to the tasks you're testing.


According to Jakob Nielsen of the Nielsen Norman Group, you start to see diminishing returns after 5 participants. If you have the time and budget for 10–15 tests, don't spend them all on one design; run several rounds of testing with different participants, iterating on your designs after each round.
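Nielsen's diminishing-returns claim comes from a simple discovery model: if each participant uncovers roughly 31% of the usability problems (the average he reports), the expected share found after n sessions is 1 − (1 − 0.31)^n. Here's a quick sketch of that curve; the total of 100 problems is an arbitrary illustration, not a figure from the article.

```python
# Nielsen's rule-of-thumb model for problem discovery:
# found(n) = N * (1 - (1 - L)^n), where N is the total number of
# usability problems and L is the share a single participant
# uncovers (Nielsen's reported average is about 31%).

def problems_found(n_users: int, total_problems: int = 100, l: float = 0.31) -> float:
    """Expected number of problems uncovered after n_users sessions."""
    return total_problems * (1 - (1 - l) ** n_users)

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} users -> {problems_found(n):5.1f} of 100 problems found")
```

Three users already surface about two-thirds of the problems, and by five you're past 80%, which is why extra sessions on the same design buy you less and less.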

When in doubt, remember: Testing even with 1 user is 100% better than testing with none.






Okay, so we’ve found a few issues with our website. Now what?


Aha, the purpose of testing, of course, isn't just to find problems. You find them so you can fix them!


You want to go after the most serious, most pressing things you observed in testing. You don't have the time or resources to fix everything. That's okay; that's not the point. Here are a few helpful questions for deciding which of the problems you uncovered during testing you should fix.


Determine Severity:


  • Will a lot of people experience the problem?

  • Will it cause a serious problem for the people who experience it, or is it just an inconvenience?


Then ask:


  • "What's the smallest, simplest change we can make that's likely to keep people from having the problem we observed?"


You might find yourself resisting this question, but don't. You want to make things better for your users right now; you're not necessarily looking for a permanent solution. The diagram below shows a few questions you can ask as you look to fix things.




Don't add. Remove!


When it comes time to tweak, start by taking something away, not by adding something. Often the problem is that the page contains too much the user doesn't need. When you have an instinct to add something, always question it.


Should you find yourself craving more and ready to run your first usability test, here are some helpful resources to get started.



 


Special thanks to a few well-known, trusted sources who inspired this article, notably usability gurus Steve Krug and Jakob Nielsen.

Resources (the older ones still ring true today)


"Rocket Surgery Made Easy" - Steve Krug, 2010


