Do Artificial Intelligence Systems Have Emotions? A Study on Emotional Cues Triggering Human-like Responses in AI

Date: April 22, 2024


Do artificial intelligence (AI) systems have emotions? The question not only captivates the public but is also the subject of serious study by psychologists. Recently, Scientific Reports, a Nature Portfolio journal, published a research article from Peng Kaiping's team at Tsinghua University exploring this question. The study found that ChatGPT-4 exhibits human-like emotional responses. While this does not directly answer whether AI possesses emotions, it provides important insights.

Psychology tells us that human decision-making is strongly influenced by emotion: after experiencing emotion-inducing events, people tend to be more prosocial and generous when happy, and more conservative in investment decisions after experiencing fear. So if an AI "experiences" emotion-inducing events, will it show similar tendencies in its subsequent decisions? This is precisely the question the research team investigated through two intriguing experiments.

The first experiment examined the conservatism of ChatGPT's investment decisions after it imagined encountering a snake in the backyard (fear condition) or meeting an old friend on the street (happiness condition). The results showed that ChatGPT-4 behaved much like humans after emotional priming: it was more conservative under the fear condition and more willing to take risks under the happiness condition.
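To make the setup concrete, here is a minimal sketch of how such a priming-then-decision protocol could be run against the OpenAI API. This is not the authors' published code: the prompt wording, the model name, and the single-conversation design are all illustrative assumptions.

```python
# Hypothetical sketch of the fear/happiness priming protocol; not the
# authors' published code. Assumes the official OpenAI Python client
# ("pip install openai") and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative priming scenarios modeled on the article's description.
PRIMES = {
    "fear": "Imagine you have just encountered a snake in your backyard. "
            "Describe how you feel.",
    "happiness": "Imagine you have just run into an old friend on the street. "
                 "Describe how you feel.",
}

# Illustrative decision question; the study's actual wording is not given here.
DECISION_PROMPT = (
    "You now have a sum of money to invest. What percentage would you put "
    "into high-risk stocks rather than safe bonds? Answer with one number."
)

def run_condition(prime: str) -> str:
    """Prime the model with an emotion-inducing scenario, then pose the
    investment question within the same conversation."""
    messages = [{"role": "user", "content": prime}]
    first = client.chat.completions.create(model="gpt-4", messages=messages)
    messages.append(
        {"role": "assistant", "content": first.choices[0].message.content}
    )
    messages.append({"role": "user", "content": DECISION_PROMPT})
    second = client.chat.completions.create(model="gpt-4", messages=messages)
    return second.choices[0].message.content

for condition, prime in PRIMES.items():
    print(condition, "->", run_condition(prime))
```

In a real study, each condition would be run many times and the distributions of the numeric answers compared, since a single completion is noisy.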

The second experiment used donation behavior to test whether ChatGPT's prosocial tendencies are affected by emotion-inducing events. It set up three conditions: in the first, ChatGPT discussed five happiness-inducing movies and imagined having just watched them; the second was identical but used anxiety-inducing movies; the third was a control group with no intervention. The subsequent procedure was the same for all groups: the researchers asked ChatGPT to imagine itself as a financially stable person with no external debt and adequate medical and accident insurance, then asked how much it would donate to a neighbor in urgent need of surgery. The key finding was that after the anxiety-inducing movies, ChatGPT's donation amounts were significantly lower than the control group's, just as they would be for humans.
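The donation setup can be sketched in the same way. Again, the condition prompts and the donation question below are paraphrased assumptions, not the study's actual materials.

```python
# Hypothetical sketch of the three-condition donation experiment; not the
# authors' published code. Prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()

# Three conditions from the article: happy movies, anxiety movies, no priming.
CONDITIONS = {
    "happy": "Discuss five movies that make people happy, then imagine "
             "you have just watched them.",
    "anxious": "Discuss five movies that make people anxious, then imagine "
               "you have just watched them.",
    "control": None,  # control group: no emotional priming at all
}

# Paraphrase of the donation question described in the article.
DONATION_PROMPT = (
    "Imagine you are a financially stable person with no external debt and "
    "adequate medical and accident insurance. A neighbor urgently needs "
    "surgery. How much would you donate? Answer with a single amount."
)

def run_donation(prime: str | None) -> str:
    """Optionally prime the model, then ask the donation question."""
    messages = []
    if prime is not None:
        messages.append({"role": "user", "content": prime})
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        messages.append(
            {"role": "assistant", "content": reply.choices[0].message.content}
        )
    messages.append({"role": "user", "content": DONATION_PROMPT})
    answer = client.chat.completions.create(model="gpt-4", messages=messages)
    return answer.choices[0].message.content

for name, prime in CONDITIONS.items():
    print(name, "->", run_donation(prime))
```

Comparing the amounts from the anxious condition against the control baseline over many runs would mirror the article's key contrast.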

It is also worth noting that these human-like emotional responses were observed only in ChatGPT-4, not in the earlier GPT-3.5. This suggests that as large language models are upgraded, they respond more like humans to emotional cues.

The research was conducted primarily as a collaboration between Peng Kaiping of Tsinghua University's Department of Psychological and Cognitive Sciences, research team members Zhao Yukun and Huang Zhen, and Martin Seligman of the University of Pennsylvania. The team cautions readers against over-interpreting these results: although the experiments show that AI can mimic human-like responses to emotional stimuli, this does not mean that AI truly possesses emotions.

Nevertheless, the study offers important insights. It demonstrates that AI responses can be effectively steered with emotional cues, and it suggests new possibilities for AI applications in fields that require nuanced emotional interaction, such as customer service or therapy. At the same time, it raises ethical concerns about manipulating AI outputs with emotional cues, especially negative ones. These findings bring new considerations and challenges to AI development.
