As UX researchers, we tend to spend more time explaining our findings than providing our recommendations. Yet, however well we explain both, there comes a time when we're not present, and the people who have to implement the recommended changes must rely on the written recommendations and what they remember from our explanations. So it's very important to ensure that your UX recommendations are understandable, concise, specific, believable, authoritative, actionable, feasible, flexible, prioritized, and easy to review. I provide advice on how to write better recommendations in my latest article on UXmatters:
Any user research is better than doing no user research, right? If you can't reach your target users, you can do research with your company's employees, because they're kind of similar, right? If you can't visit people in person to see them perform their tasks, maybe you can do phone interviews or send out a survey. That's better than nothing, right?
The truth is that it's sometimes better not to do any user research than to do half-assed user research. I'm not saying that you always have to have perfect user-research conditions or it's not worth doing. In reality, we rarely have all the time we need and the perfect circumstances to conduct extensive user research. So it's understandable that we sometimes have to cut corners and make do with what we're able to get. However, there's a fine line between discount user research and half-assed user research.
The danger is that when you always cut corners, you can become an enabler. Your shortcuts become the norm, allowing your company to check off the user-research checkbox and say, "Yes, we do user research." If you can't eventually convince them to devote more time and effort to user research, it's sometimes better to practice tough love and let them fail by doing no user research at all, rather than allowing them to rely on poor-quality research.
In my latest UXmatters article, I provide advice about how to know when you’re practicing half-assed user research and how to improve. Check it out: Avoiding Half-Assed User Research
Image by Spider.Dog
I just published a new article on UXmatters, Avoiding Common Prototyping Mistakes. This topic came from my repeated experience making these mistakes when prototyping. There are so many great, new prototyping tools out there. It seems new tools are popping up every week.
The great thing about these new prototyping tools is that they make it easy to create realistic-looking, interactive prototypes. However, it's also very easy to get carried away and try to show too much in the prototype. You think, "I'll show how this works. Well, then I guess I might as well show this too." The next thing you know, you've spent hours creating something really impressive but really complicated.
In this article I discuss and provide solutions for these six prototyping problems:
- Jumping too soon into prototyping
- Failing to plan what to prototype
- Prototyping at the wrong fidelity
- Getting carried away by creating too much
- Failing to explain the type of prototype
- Not creating a guide for navigating the prototype
Check out Avoiding Common Prototyping Mistakes on UXmatters.
That's the title of the article I just published on UXmatters, in which I give advice on how to soften the blow of delivering bad news to clients. Let's face it: when we perform an expert evaluation, usability testing, or user research on an existing product, most of what we find is problems with that product. Clients don't pay us to tell them how great their products are. If they've hired us, it's to find problems that can be fixed. But there are ways to make it easier to deliver bad news. In this article I provide the following advice:
- Get the stakeholders to admit that it’s ugly first
- Get everyone to buy into your research methods upfront
- Encourage stakeholders to observe the research
- Blame the bad news on the participants
- Back up your findings with metrics
- Present recordings and quotations
- Don’t beat your audience over the head
- Emphasize your expertise
- Back up your findings with examples of best practices
- Show your stakeholders they’re not alone
- Position it as providing recommendations, not pointing out problems
- Mention the positive aspects too
- Deliver your findings in person
- Prioritize the problems they should solve
- Provide a plan for addressing the problems
You can find more details about this advice in my latest article, Your Baby is Ugly.
Have you ever wondered what qualities you need to succeed in user research? I just published an article on UXmatters, Qualities of Effective User Researchers, which lists the following qualities that lead to a successful career in user research:
- Ability to Learn Quickly
- Organizational Skills and Attention to Detail
- Time Management Skills
- Mental Agility
- Flexibility and Adaptability
- Good Memory
- Effective Notetaking
- Analytical Skills
- Problem Solving
- Design Skills
- Writing Skills
- Communication Skills
This may sound like an intimidating list, but you don’t have to be perfect in all of these areas. Check out the full article on UXmatters – Qualities of Effective User Researchers.
Cow image by FFCU (Free for Commercial Use), under a Creative Commons license
Over the years, I’ve made my share of mistakes and learned about the types of questions and topics that participants have a hard time answering accurately in user research. Most people do try to answer your questions, but they may not be able to easily and accurately answer these types of questions:
- Remembering details about the past
- Predicting what they might do in the future
- Accurately answering a hypothetical question
- Discussing the details of their tasks out of context
- Telling you what they really need
- Imagining how something might work
- Envisioning an improved design
- Distinguishing between minuscule design differences
- Explaining the reasons for their behavior
I discuss these types of difficult questions, and better ways to get that information from participants, in my latest article on UXmatters:
Avoiding Hard-to-Answer Questions in User Interviews.
Image credit: Véronique Debord-Lazaro on Flickr
Today I published an article on UXmatters, Testing Your Own Designs. It's often been said that you shouldn't conduct usability testing on your own designs, because you may be too biased, too defensive, or too close to the design to be an impartial facilitator. Although that may be the ideal, UX designers often don't have a choice. They may be the only person available to test the design, so if they don't test it, no one will. So in this article I provide advice for those times when you have to test your own design, as well as advice for when someone else tests your design.
I was hesitant to write this article, because it's a topic that many others have written about, but I felt that, as someone who has been on all sides of the issue, I had something to add. Here are some other good articles on this topic:
Should Designers and Developers Do Usability? by Jakob Nielsen
Because Nobody's Baby Is Ugly … Should Designers Test Their Own Stuff? by Cathy Carr at Bunnyfoot
What do these three things have in common: playing in a one-man band, juggling chainsaws, and babysitting ten three-year-olds? Doing all of them at the same time is only slightly more difficult than conducting field studies.
Of course, I’m just kidding, but not by much. In my opinion, field studies are the most difficult user research technique for three reasons: unpredictability, the need to learn about unfamiliar domains, and the need to deal with competing demands. There’s not much you can do about unpredictability or the need to learn a new domain, but there are things that you can do to better handle the competing demands of field studies.
In my latest article on UXmatters, I discuss these competing demands and how to best handle them:
- Observing and listening
- Determining whether and when to ask questions
- Formulating questions
- Assessing answers
- Managing the session
- Assessing the session
- Keeping track of the time
- Managing observers
- Capturing the session
- Maintaining a good rapport with the participant
Read more in my latest article, Handling the Competing Demands of Field Studies.
Image credit: Highways England on Flickr
In previous research projects, there have been several times when participants were suspicious of our motives. This tends to happen when you're doing research with a group of employees to understand their work processes, observing how they do their jobs rather than studying an existing system.
When there's a direct connection to an application they use, people tend to be less suspicious. They can see that you're trying to understand how well the application works and where it can be improved. Suspicion is especially likely at a company that has had bad experiences with reorganizations, layoffs, and offshoring. People there tend to see us as just another group of consultants coming in to study how they do their work, looking for what can be improved or who can be eliminated.
So how do you reassure people in these difficult situations about your true purpose? I wrote a recent article about this at UXmatters – Winning Over Wary Participants. Check it out, and if you have additional tips to make people feel more comfortable in these situations, feel free to leave a comment.
Image: Jake Sutton
Are consent forms always necessary? We're told that consent forms are an indispensable part of ethical user research. Consent forms are the vehicle for giving and getting informed consent: they inform participants of what the study will entail, and they let participants indicate their consent with a signature and date.
Yet consent forms can conflict with the informal, friendly rapport that we try to establish with participants. Anything you present for people to sign immediately looks like a legal document or liability waiver. It puts them on guard.
That’s ironic because consent forms are the opposite of legal waivers. Legal documents are created to protect the interests of the company that creates them, while consent forms are created to protect the rights of the people signing them. Yet most participants assume they are signing a typical legal waiver.
Consent forms seem acceptable in more formal user-research situations, such as usability testing and focus groups, but they seem odd and even off-putting in more informal situations. I've found them especially awkward when doing field studies at people's offices. You strive to set up an informal situation, such as asking someone to show you how they create reports or asking them to try out a new design for an expense-report application. But when you show up with a consent form to sign, it shatters the informal, comfortable rapport you tried so hard to establish. I've had people react to consent forms in these situations with, "Hey! I thought we were just talking here." How many times in the course of your work life has someone shown up to a meeting with a legal document for you to sign?
So I say use your judgment. When a consent form feels like it would be overly formal, don't use it (unless your legal department requires it). Instead, get informed consent informally, by email. Inform with an email describing what will take place, and get consent from the reply email agreeing to participate. At the start of the session, you can inform participants again with a summary of what you'll be doing; they then give consent by continuing to participate in the session.
A good guideline is how comfortable or uncomfortable you feel when giving participants the consent form. If you feel uncomfortable, you’re probably breaking a group norm. So you should find a more acceptable way of getting informed consent.