Recently, Associate Professor Zhang Dan's research group in the Department of Psychology at Tsinghua University published a paper titled "Emotional Experience during Human-Computer Interaction: A Survey" in the International Journal of Human-Computer Interaction. The co-first authors are Tang Lilu, a master's graduate of the department, and Yuan Peijun, a researcher at the Qiyuan Laboratory. Associate Professor Zhang Dan is the corresponding author.
Using an online questionnaire platform, the study collected retrospective self-reports from 400 participants on how frequently they experienced each of 44 emotions. These emotions were assessed in six representative human-computer interaction scenarios (online chat, online meetings, online learning, online shopping, online navigation, online banking) and compared with six corresponding face-to-face scenarios (offline chat, offline meetings, offline learning, shopping in physical stores, asking strangers for directions, bank counter services). The results show that when the computer serves as a content carrier, people report fewer positive emotions than in the corresponding interpersonal scenarios; when the computer serves as the interaction partner itself, people report more positive emotions. Factor analysis of the emotion ratings from the human-computer interaction scenarios indicates that these emotional experiences can be summarized into seven factors: (1) low-arousal positive focus, (2) positive engagement and enjoyment, (3) emotional resonance, (4) positive surprise, (5) high-arousal negative emotion, (6) confusion and misunderstanding, and (7) boredom and tedium.
This study charts fine-grained categories of emotional experience across diverse human-computer interaction scenarios. The proposed emotional model is tailored to the contextual characteristics of human-computer interaction and describes the emotional states that arise in it more effectively, making it a promising theoretical framework for future research on emotion in human-computer interaction.
Paper Information:
https://www.tandfonline.com/doi/full/10.1080/10447318.2023.2259710