Chapter 14: Introducing Evaluation
In-Depth Activity Comments
This activity is designed to encourage you to analyze the evaluation case studies in detail and to compare them. By doing so you will discover more about the underlying reasons why the designers and evaluators did what they did. You will also discover that the descriptions of what was done, how it was done, and when it was done are incomplete for some case studies, as they often are in published papers that you read. In these cases you will have to speculate about the details and make suggestions about what you think happened or what could have happened.
The case studies described in this chapter demonstrate how different evaluation methods are used together to complement each other. The advantage of doing this is that they provide different types of data which, when analyzed, offer different perspectives on what is happening. For example, quantitative data from usability tests is often supplemented with observational data and data from user satisfaction questionnaires. You will also see that some methods are used only for examining particular parts of systems or during particular stages of the design. For example, experiments are typically used to compare parts of two different designs, because evaluating the whole system may take too long, be too costly, or simply be unnecessary. In-the-wild studies are typically carried out to examine the efficacy of state-of-the-art systems to see whether they are useful and appeal to the intended audience.
The description of the activity suggests that you may find it useful to complete a table; alternatively, you may wish to write longer descriptions. Whichever approach you adopt, be sure to focus on when during the design process the evaluations were performed, which methods were used, and what was learned from the evaluations.
Below are some brief comments. Because the questions are relatively open, you may have some other ideas. If so, you might like to discuss them with a friend, peer, or colleague.
Q1
Q3 In both case studies the researchers selected methods that enabled them to gain different perspectives on the participants’ experiences. Both studies used relatively constrained methods that produced quantitative or categorical data, which was supplemented with open-ended interviews.
Q4 The more quantitative methods used in both studies focused on the usability goals of the products, while the open-ended interviews focused more strongly on user experience goals. However, in each study there was some overlap. For example, some of the online questions in the Ethnobot study focused on the participants’ experiences.