Before software usability testing emerged in the 1980s and became more widespread in the 1990s, quality assurance testing was the only testing software received before release. Quality assurance focused on ensuring that the developed software met the defined requirements and contained no defects. However, requirements were rarely generated through user research. Instead, they were usually generated by business stakeholders and documented by business analysts. If users were involved at all, they were often represented by a few subject-matter experts gathered in meetings to talk about what they wanted the software to do.
Quality Assurance Testing
Quality assurance analysts tested the software to find technical defects. If the software did what it was supposed to do according to the requirements, it passed the test, regardless of how usable it was or how well it fit the needs of the users.
User Acceptance Testing
“Users” and stakeholders were also involved in user acceptance testing, which was really just a sign-off that the software did what it was supposed to do. Because it occurred at the very end of the software development process, it was difficult to make anything but very minor changes at that point. As long as the software did what it was supposed to do, stakeholders were willing to overlook usability issues. It seemed easier to write off any problems as things to address in training than to require additional development work.
The Evolution of Usability
As it became obvious that this method of software development was failing to address usability issues, usability testing was added to projects in much the same way as quality assurance: evaluating software at the end of a project to find and fix usability problems. As it became apparent that this was too late to make any major changes, usability testing gradually moved earlier and earlier in the design process, with multiple iterations of design and testing. Eventually, people realized that it would be better to avoid problems in the first place by finding out what users really need at the beginning of the project. Proper user research was born.
We Forgot the End of Projects
Unfortunately, as we’ve moved user research and usability testing earlier in the process, we’ve tended to overlook the end of projects. We do the upfront user research, followed by iterative design and usability testing, but once development begins, we often drop out and move on to the next project. Then, when the final product is released, we often find ourselves scratching our heads, thinking, “What happened?” as we see how greatly it varies from what we intended.
All kinds of things can happen between the final design iteration we test and the final coded interface. If we aren’t involved in checking the final design and development, problems tend to slip through.
Usability Review
To solve this problem in a previous job, I created a process I called a Usability Review. During functional QA testing, a usability analyst would review the developed application looking for usability and design problems. Any issues found were entered in the QA bug tracking software (Test Director in this case) as usability or design issues and assigned to a developer to review and fix. After fixing or rejecting the problem, the developer assigned the issue back to the usability analyst to either accept or reject the solution. This worked very well and caught a lot of issues.
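To make the issue lifecycle concrete, here is a minimal sketch of how those hand-offs might be modeled, for example in a lightweight reporting script alongside the tracker. The issue types, statuses, and the ReviewIssue class are illustrative assumptions, not part of Test Director or any particular bug-tracking tool.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class IssueType(Enum):
    USABILITY = auto()   # found during the usability review
    DESIGN = auto()      # deviation from the intended design
    TECHNICAL = auto()   # functional defect, routed to QA


class Status(Enum):
    OPEN = auto()              # logged by the usability analyst
    ASSIGNED_TO_DEV = auto()   # waiting on a fix or a rejection
    FIXED = auto()             # developer believes it is resolved
    REJECTED_BY_DEV = auto()   # developer declined to change it
    ACCEPTED = auto()          # usability analyst verified the outcome
    REOPENED = auto()          # fix or rejection was not accepted


@dataclass
class ReviewIssue:
    """One usability-review finding, tracked like any other defect."""
    summary: str
    issue_type: IssueType
    status: Status = Status.OPEN
    assignee: str | None = "developer"
    history: list = field(default_factory=list)

    def developer_resolves(self, fixed: bool) -> None:
        # The developer either fixes or rejects the issue,
        # then it goes back to the usability analyst.
        self.status = Status.FIXED if fixed else Status.REJECTED_BY_DEV
        self.assignee = "usability analyst"
        self.history.append(self.status)

    def usability_verifies(self, accepted: bool) -> None:
        # The usability analyst has the final say: accept or reopen.
        self.status = Status.ACCEPTED if accepted else Status.REOPENED
        self.assignee = None if accepted else "developer"
        self.history.append(self.status)


# Example: a design deviation found during the review.
issue = ReviewIssue("Button label differs from approved design", IssueType.DESIGN)
issue.status = Status.ASSIGNED_TO_DEV
issue.developer_resolves(fixed=True)
issue.usability_verifies(accepted=True)
print(issue.status)  # Status.ACCEPTED
```

The key design point the sketch captures is that the loop always closes with the usability analyst, who accepts or rejects the resolution rather than letting issues quietly disappear.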
Advantages of a Formal Usability Review
You may think that staying involved throughout the project would be enough to find and prevent any usability and design problems, but there are several advantages to having an official, detailed usability review process.
- As an official step in the project, the usability review gets added to the project plan, ensuring that it will actually take place and that someone will let you know when the application is ready for review. Without that official task in the project plan, it’s easy for others to forget to notify you, and when you’re busy on other projects, it’s easy for you to forget also.
- A usability review requires you to examine the application in detail, rather than giving it a quick once-over. A detailed examination tends to find more problems and gives you a more realistic sense of how well the application will work for users.
- Entering the issues in the QA bug tracking software gives usability and design problems the same importance and status as QA defects, makes someone responsible for fixing them, and gives you the power to approve or reject the solution.
Can’t QA People Perform the Usability Review?
Can’t quality assurance people find usability and design issues themselves, or can’t they be trained to do so? Yes, it’s possible, but they usually don’t have as much knowledge and experience with user experience issues, and they typically aren’t involved in the user research and usability testing that takes place earlier in the project. Usability and design professionals are the best judges of whether the final application matches the intended design and user experience.
When you do a usability review, you’ll often find issues that are really QA defects, and the QA analysts will often find things that appear to be usability or design issues. A good way to coordinate efforts is to enter any technical defects you find and assign them to the QA analyst to assess; in turn, the QA analyst can assign any usability or design issues they find to you to assess.
So add a usability review to your projects, and you’ll find that it pays to keep usability and design formally involved all the way to the end of a project. You’ll end up with final products that more closely match the original vision.