Friess, Erin. “Discourse Variations between Usability Evaluation and Usability Reports.” Journal of Usability Studies 6, no. 3 (2011): 102-16.
Erin Friess examines the differences between what happens in usability evaluations and what appears in usability reports. She observed the usability evaluation process, compared the results against the language of the corresponding usability reports, and found that many evaluators modified the language and outcomes of the usability studies in their reports. Friess suggests several reasons for this discrepancy and calls for further research into the topic.
Friess compared the language used by end-user participants during usability sessions with the language in the usability reports submitted by the evaluators. After examining her results, Friess established that:
- 84% of findings had some basis in the usability evaluation
- 16% of findings had no basis from the usability evaluation
Of the 84% of findings that had some basis:
- 55% were accurate findings
- 29% were potentially inaccurate
In each category, soundbites and interpretation were the key contributors to the evaluators’ findings. From her results, Friess suggests a number of reasons for the discrepancies between usability evaluations and usability reports:
- Confirmation bias in oral reports
- Bias in what’s omitted in the usability reports
- Biases in client desires
- Poor interpretative skills
In each of these categories, biases are natural and to be expected, and adequate training should be able to correct their impact. Poor interpretative skills, however, are a greater challenge to overcome. They lead to bias because those conducting usability tests may guide end-users toward predetermined conclusions if they anticipate potential issues, or may choose to interpret end-user behavior in line with what they expect to see. Educating usability testers on how to interpret end-user comments, behavior, and questions would go a long way toward reducing the discrepancy between end-user usability evaluations and evaluators’ reports.
How can usability evaluators improve interpretative skills?
Redish, Janice. “Technical Communication and Usability: Intertwined Strands and Mutual Influences.” IEEE Transactions on Professional Communication 53, no. 3 (2010): 191-201.
Janice Redish’s essay is an answer to the multipart question, “How can Technical Communicators contribute to [a new approach to usability] the evaluation of more complex systems, to the more open exchange of data and methods, to the redefinition of the profession”?
She takes up this question as someone with an extensive background in technical communication and begins her response with her own account of the field’s history. Although some in the industry date the start of usability studies to the late 1980s, Redish asserts that she and others were conducting “usability studies” as early as the 1970s. Redish continues through the decades, highlighting examples of technical communicators and industry professionals who intertwined designing, writing, and publishing products with usability testing. At each stage, Redish argues that usability and technical communication are inextricably linked, noting how many technical communicators morph into usability analysts and how usability analysts become increasingly involved with the art and science of communication.
Following this brief overview of the history of usability and technical communication, Redish identifies four shared traits that explain why technical communicators and usability designers are so closely linked:
- Need for excellent collaboration skills
- Ability to communicate clearly to multiple audiences
- Understanding, and clarifying, complexity
- Being open to change; quick to adopt and adapt new skills and new technologies
Since technical communicators and usability experts share these traits, Redish argues that the futures of the two professions are intertwined and that both fields are ripe for cross-pollination.
While Redish suggests multiple reasons for the intertwining of technical communication and usability, do you think that the future of either profession lies in a combined role? Why or why not?
Barnum, Carol M., and Laura A. Palmer. “More than a Feeling: Understanding the Desirability Factor in User Experience.” Paper presented at the ACM Conference on Human Factors in Computing Systems (CHI 2010), Atlanta, Georgia, April 10-15, 2010.
Barnum and Palmer report the results of several case studies investigating the use of user experience cards, which are used to determine user satisfaction. The authors contend that of the three major elements of gauging product usability (effectiveness, efficiency, and satisfaction), user satisfaction is the most nebulous and the most difficult to determine accurately. The primary obstacle is “acquiescence bias,” the tendency of usability participants to give higher ratings than observation of their behavior suggests.
Over the course of five case studies administered between 2006 and 2009, Barnum and Palmer used a modified version of Microsoft’s desirability cards to determine user satisfaction. They found that the cards indicated user satisfaction with a high level of consistency, although they note that the small scope of their study limits how broadly the findings can be generalized.
Barnum and Palmer’s key contribution is a model that better captures the true feelings of participants and overcomes, to a degree, acquiescence bias among product evaluators. When user experience cards are combined with other usability evaluation tools such as video evidence, interviews, observations, and standard post-test questionnaires, a truer sense of participants’ thoughts can be established.
Will more or fewer user experience cards generate better responses from study participants?