What were the main constraints that influenced the evaluations?

Posted: February 27th, 2023

Week 9 Assignment – Chapter 14 In-Depth Activity

For this activity, complete the In-Depth Activity at the end of Chapter 14 (page 519 in the VitalSource book). Your submission should meet the following requirements:


· Create a Word document.

· In your document, write a 4–5 paragraph response (total) to the prompts provided in the activity.

· Your responses should show you understand the subject matter.

 

 

In-Depth Activity

In this activity, think about the case studies and reflect on the evaluation methods used.

 

1. For the two case studies discussed in this chapter, think about the role of evaluation in the design of the system and note the artifacts that were evaluated: when during the design were they evaluated, which methods were used, and what was learned from the evaluations? Note any issues of particular interest. You may find that constructing a table with the following columns is a helpful approach:

· Name of the study or artifact evaluated

· When during the design did the evaluation occur?

· How controlled was the study, and what role did users have?

· Which methods were used?

· What kind of data was collected, and how was it analyzed?

· What was learned from the study?

· Notable issues

2. What were the main constraints that influenced the evaluations?

3. How did the use of different methods build on and complement each other to give a broader picture of the evaluations?

4. Which parts of the evaluations were directed at usability goals and which at user experience goals?

14.4.1 Case Study 1: An Experiment Investigating a Computer Game

For games to be successful, they must engage and challenge users. Criteria for evaluating these aspects of the user experience are therefore needed. In this case study, physiological responses were used to evaluate users’ experiences when playing against a friend and when playing alone against the computer (Mandryk and Inkpen, 2004). Regan Mandryk and Kori Inkpen conjectured that physiological indicators could be an effective way of measuring a player’s experience. Specifically, they designed an experiment to evaluate the participants’ engagement while playing an online ice-hockey game.

 

Ten participants, who were experienced game players, took part in the experiment. During the experiment, sensors were placed on the participants to collect physiological data. The data collected included measurements of the moisture produced by sweat glands of their hands and feet and changes in heart and breathing rates. In addition, they videoed the participants and asked them to complete user satisfaction questionnaires at the end of the experiment. To reduce the effects of learning, half of the participants played first against a friend and then against the computer, and the other half played against the computer first. Figure 14.2 shows the setup for recording data while the participants were playing the game.
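The counterbalanced design described above (half of the participants playing against a friend first, the other half against the computer first) can be sketched as follows. This is an illustrative sketch only; the function name and random assignment are assumptions, not details from the study.

```python
import random

def assign_conditions(participants):
    """Counterbalance condition order to reduce learning effects:
    half the participants play against a friend first, the rest
    play against the computer first."""
    shuffled = participants[:]
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    friend_first = [(p, ["friend", "computer"]) for p in shuffled[:half]]
    computer_first = [(p, ["computer", "friend"]) for p in shuffled[half:]]
    return friend_first + computer_first

# Ten participants, as in the study
orders = assign_conditions([f"P{i}" for i in range(1, 11)])
```

Counterbalancing matters here because any improvement a player shows in the second condition could otherwise be attributed to practice rather than to playing against a friend or the computer.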


 

Figure 14.2 The display shows the physiological data (top right), two participants, and a screen of the game they played.

 

Source: Mandryk and Inkpen (2004). Physiological Indicators for the Evaluation of Co-located Collaborative Play, CSCW’2004, pp. 102–111. Reproduced with permission of ACM Publications

 

Mean ratings from the user satisfaction questionnaire (on a 1–5 scale for each item) indicated that playing against a friend was the favored experience (Table 14.1). Data recorded from the physiological responses was compared for the two conditions and, in general, revealed higher levels of excitement when participants played against a friend than when they played against the computer. The physiological recordings were also compared across participants and, in general, indicated the same trend. Figure 14.3 shows a comparison for two participants.

 

Table 14.1 Mean subjective ratings given on a user satisfaction questionnaire using a five-point scale, in which 1 is lowest and 5 is highest for the 10 players.


Figure 14.3 (a) A participant’s skin response when scoring a goal against a friend versus against the computer, and (b) another participant’s response when engaging in a hockey fight against a friend versus against the computer

 

Source: Mandryk and Inkpen (2004). Physiological Indicators for the Evaluation of Co-located Collaborative Play, CSCW’2004, pp. 102–111. Reproduced with permission of ACM Publications

 

Identifying strongly with an experience state is indicated by a higher mean. The standard deviation indicates the spread of the results around the mean. Low values indicate little variation in participants’ responses; high values indicate more variation.
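The interpretation of mean and standard deviation described above can be illustrated with a short computation. The ratings below are made-up example values, not data from the study.

```python
from statistics import mean, stdev

# Hypothetical 1-5 ratings from ten players for one questionnaire item;
# these numbers are illustrative, not the study's data.
ratings = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]

m = mean(ratings)    # higher mean -> players identify more strongly with the state
sd = stdev(ratings)  # lower SD -> less variation, i.e. more agreement among players
```

A mean of 4.1 with a modest standard deviation would indicate that most players identified with the experience state and broadly agreed with one another.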

 

Because of individual differences in physiological data, it was not possible to directly compare the means of the two sets of data collected: subjective questionnaires and physiological measures. However, by normalizing the results, it was possible to correlate the results across individuals. This indicated that the physiological data gathering and analysis methods were effective for evaluating levels of challenge and engagement. Although not perfect, these two kinds of measures offer a way of going beyond traditional usability testing in an experimental setting to get a deeper understanding of user experience goals.
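Because each person's baseline physiological levels differ, raw readings cannot be compared directly across participants. One common normalization approach is z-scoring each participant's signal before correlating across individuals; this is a minimal sketch of that idea, not necessarily the exact procedure the authors used.

```python
from statistics import mean, stdev

def z_normalize(values):
    """Rescale one participant's readings to zero mean and unit
    variance so signals can be compared across individuals."""
    m, sd = mean(values), stdev(values)
    return [(v - m) / sd for v in values]

# Illustrative skin-conductance readings for two participants with
# very different baselines (values are made up for this example).
p1 = [2.1, 2.4, 2.2, 3.0, 2.5]
p2 = [8.0, 8.6, 8.1, 9.5, 8.4]

z1, z2 = z_normalize(p1), z_normalize(p2)
```

After normalization, both participants' signals sit on the same scale (mean 0, standard deviation 1), so a spike for one participant can be meaningfully compared with a spike for another.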

 

14.4.2 Case Study 2: Gathering Ethnographic Data at the Royal Highland Show

Field observations, including in-the-wild and ethnographic studies, provide data about how users interact with technology in their natural environments. Such studies often provide insights not available in lab settings. However, it can be difficult to collect participants’ thoughts, feelings, and opinions as they move about in their everyday lives. Usually, this involves observing participants and asking them to reflect after an event, for example, through interviews and diaries. In this case study, a novel evaluation approach—a live chatbot—was used to address this gap by collecting data about people’s experiences, impressions, and feelings as they visited and moved around the Royal Highland Show (RHS) (Tallyn et al., 2018). The RHS is a large agricultural show that runs every June in Scotland. The chatbot, known as Ethnobot, was designed as an app that runs on a smartphone. In particular, Ethnobot was programmed to ask participants pre-established questions as they wandered around the show and to prompt them to expand on their answers and take photos. It also directed them to particular parts of the show that the researchers thought would interest the participants. This strategy also allowed the researchers to collect data from all of the participants in the same place. Interviews were also conducted by human researchers to supplement the data collected online by the Ethnobot.

 

The overall purpose of the study was to find out about participants’ experiences of, and feelings about, the show and of using Ethnobot. The researchers also wanted to compare the data collected by the Ethnobot with the interview data collected by the human researchers.

 

The study consisted of four data collection sessions using the Ethnobot over two days and involved 13 participants, who ranged in age and came from diverse backgrounds. One session occurred in the early afternoon and the other in the late afternoon on each day of the study. Each session lasted several hours. To participate in the study, each participant was given a smartphone and shown how to use the Ethnobot app (Figure 14.4), which they could experience on their own or in groups as they wished.


 

Figure 14.4 The Ethnobot used at the Royal Highland Show in Scotland. Notice that the Ethnobot directed participant Billy to a particular place (that is, Aberdeenshire Village). Next, Ethnobot asks “… What’s going on?” and the screen shows five of the experience buttons from which Billy needs to select a response.

 

Source: Tallyn et al. (2018). Reproduced with permission of ACM Publications

 

Two main types of data were collected.

 

· The participants’ online responses to a short list of pre-established questions, answered by selecting from prewritten comments (for example, “I enjoyed something” or “I learned something”) presented by the Ethnobot as buttons called experience buttons.

· The participants’ additional open-ended online comments and photos, offered in response to Ethnobot’s prompts for more information. The participants could contribute this data at any time during the session.
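The two kinds of data above could be captured in a single record per Ethnobot exchange. The structure below is hypothetical; the field names are illustrative and do not come from the published study.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record for one Ethnobot entry; field names are
# illustrative, not taken from the published study.
@dataclass
class EthnobotEntry:
    participant: str
    experience_button: Optional[str] = None  # e.g. "I enjoyed something"
    comment: Optional[str] = None            # open-ended follow-up text
    photo_path: Optional[str] = None         # attached photo, if any

entry = EthnobotEntry(
    participant="Billy",
    experience_button="I learned something",
    comment="The Aberdeenshire Village exhibits were interesting",
)
```

Keeping the button selection and the optional free-text comment in one record makes it straightforward to later compare the structured responses against the open-ended ones.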

SOLUTION

In the two case studies presented in Chapter 14, evaluations were conducted to understand users’ experiences with technology. In Case Study 1, physiological responses were measured to evaluate the engagement and challenge levels of users playing an online ice-hockey game. The experiment involved ten experienced game players, half of whom played against a friend first and then against the computer, while the other half played against the computer first. The physiological data collected included measurements of the moisture produced by the sweat glands of their hands and feet, as well as changes in heart and breathing rates. User satisfaction questionnaires were also completed, and the results revealed that playing against a friend was preferred. The physiological data likewise showed higher levels of excitement when participants played against a friend than when they played against the computer. In Case Study 2, a live chatbot was used to collect data about people’s experiences, impressions, and feelings as they visited and moved around the Royal Highland Show. The chatbot, known as Ethnobot, asked participants pre-established questions about their experiences as they moved around the show and prompted them to expand on their answers and take photos. Ethnographic data collected in this way provided insights that would not have been available in a lab setting.
