In my latest article on UXmatters, Five Degrees of User Assistance, I bring up a character that people love to hate: Clippy, of course! Although I do have a bit of a soft spot for the little guy, he is a great example of unwanted user assistance.
Poor Clippy! It really wasn't his fault; he came along at a time when computers were too stupid to accurately predict when people needed help. Programmed to jump out and enthusiastically offer his assistance whenever certain events occurred, he instead came across as an unwanted interruption and annoyance.
Today, as technology becomes increasingly intelligent, computers are smart enough to provide more appropriate and more accurate user assistance. In my article, I describe these five levels of user assistance:
- Passively providing online Help content. Here’s help if you need it.
- Asking if the user needs help. Can I help you?
- Proactively offering suggestions that users can accept or ignore. Is this what you want, or do you want to correct this?
- Alerting the user that the system is going to take an action automatically, unless the user says not to. I’m going to do this, unless you tell me not to.
- Automatically taking an action for the user, without asking for permission. I’ve got this for you. Don’t worry about it.
Check it out at UXmatters: Five Degrees of User Assistance
Image source: Clippy, created by J. Albert Bowden II and licensed under CC BY 2.0
In my latest UXmatters article, I compare the latest prototyping tools to paper prototyping. Paper has long had the advantage of allowing designers to quickly and easily create early prototypes that look unfinished and encourage users to provide honest criticism. However, the latest prototyping tools have caught up to, and in some cases surpassed, paper in making it quick and easy to create prototypes without any coding.
So, do the advantages of paper prototypes still beat these new prototyping tools? That’s what I explore in my latest article, Prototyping: Paper Versus Digital.
Image credit: Samuel Mann
I just published an article in UXmatters, Conducting Qualitative, Comparative Usability Testing. It’s about conducting usability testing with two or more designs early in the design process, to get better information and user feedback before settling too soon on one particular design.
When participants are able to experience multiple designs, they can provide better feedback. As a result, you can gain greater insight into the elements of each design that work well and those that cause problems.
Over the years, I’ve made my share of mistakes and learned about the types of questions and topics that participants have a hard time answering accurately in user research. Most people do try to answer your questions, but they may not be able to easily and accurately answer these types of questions:
- Remembering details about the past
- Predicting what they might do in the future
- Accurately answering a hypothetical question
- Discussing the details of their tasks out of context
- Telling you what they really need
- Imagining how something might work
- Envisioning an improved design
- Distinguishing between minuscule design differences
- Explaining the reasons for their behavior
I discuss these types of difficult questions, and better ways to get that information from participants, in my latest article on UXmatters:
Avoiding Hard-to-Answer Questions in User Interviews.
Image credit: Véronique Debord-Lazaro on Flickr
Today I published an article in UXmatters, Testing Your Own Designs. It’s often been said that you shouldn’t conduct usability testing on your own designs, because you may be too biased, too defensive, or too close to the design to be an impartial facilitator. Although that may be the ideal, UX designers often don’t have a choice. They may be the only person available to test the design, so if they don’t test it, no one will. So in this article, I provide advice for those times when you have to test your own design, as well as for when someone else tests your design.
I was hesitant to write this article, because many others have already written about this topic, but I felt that, as someone who has been on all sides of the issue, I had something more to add. Here are some other good articles on this topic:
Testing Your Own Designs: Bad Idea? and Testing Your Own Designs Redux by Paul Sherman
Should Designers and Developers Do Usability? by Jakob Nielsen
Because Nobody’s Baby Is Ugly … Should Designers Test Their Own Stuff? by Cathy Carr at Bunnyfoot
What do these three things have in common: playing in a one-man band, juggling chainsaws, and babysitting 10 three-year-olds? When you try to do all of them at the same time, it’s only slightly more difficult than conducting field studies.
Of course, I’m just kidding, but not by much. In my opinion, field studies are the most difficult user research technique, for three reasons: unpredictability, the need to learn about unfamiliar domains, and the need to deal with competing demands. There’s not much you can do about unpredictability or the need to learn a new domain, but there are things you can do to better handle the competing demands of field studies.
In my latest article on UXmatters, I discuss these competing demands and how to best handle them:
- Observing and listening
- Determining whether and when to ask questions
- Formulating questions
- Assessing answers
- Managing the session
- Assessing the session
- Keeping track of the time
- Managing observers
- Capturing the session
- Maintaining a good rapport with the participant
Read more in my latest article, Handling the Competing Demands of Field Studies.
Image credit: Highways England on Flickr