Recently, the research group led by Professor Xiaoang Wan from the Department of Psychological and Cognitive Science at Tsinghua University published a paper titled "From collective human opinions to AI algorithms: How advice source influences judgment of behavioral health risks" in the authoritative journal Applied Psychology: Health and Well-Being (5-year impact factor: 4.3, JCR Q1). Through two experiments, the study systematically examined how AI algorithms and human collectives differ in evaluating the health risks of everyday behaviors. It further revealed how situational threat level moderates the adoption and cognitive integration of AI advice, providing empirical evidence for the design of context-sensitive AI health decision support systems.

Research Background
In daily life, people frequently face decisions involving potential risks, such as whether it is safe to eat bread one day past its expiration date or whether one can ride an electric bike without a helmet. Perception and evaluation of these risks largely determine individual behavioral choices. However, these assessments are highly susceptible to cognitive biases, such as optimism bias and immediate reward preference.
With the recent proliferation of AI systems—particularly generative large language models—people are increasingly seeking health knowledge and advice from AI. Despite this trend, few studies have systematically explored how AI evaluates the health risks of daily behaviors or the persuasive impact of AI-generated health advice on individuals.
Experimental Methodology and Findings
The research team conducted two distinct experiments to address these questions:
·Experiment 1: Recruited 60 healthy adults and conducted 30 rounds of dialogue with GPT-4o. The team then compared human and AI evaluations of 60 common health-related behaviors across three dimensions: perceived risk, severity of consequences, and probability of consequences.
·Experiment 2: Recruited another 60 participants and used the Judge-Advisor System (JAS) paradigm to examine how people update their decisions and adjust their beliefs after receiving health advice from either AI or human groups.
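In JAS studies, advice-taking is conventionally quantified with the weight-of-advice (WOA) measure: the fraction of the distance between a judge's initial estimate and the advisor's recommendation that the final estimate moves. The article does not specify the paper's exact analysis, so the helper below is only an illustrative sketch of this standard measure:

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Weight of advice (WOA), the conventional JAS measure of belief
    updating: 0 means the advice was ignored, 1 means it was fully
    adopted. Illustrative helper, not taken from the paper itself.
    """
    if advice == initial:
        # WOA is undefined when the advice matches the initial judgment.
        raise ValueError("WOA undefined: advice equals initial estimate")
    return (final - initial) / (advice - initial)

# Example: a participant initially rates a behavior's risk as 3,
# the advisor (AI or human group) suggests 7, and the participant
# settles on 5 -- halfway toward the advice.
print(weight_of_advice(3, 7, 5))  # 0.5
```

Under this measure, the high-threat result reported below would correspond to larger WOA values for AI advice than for human-group advice.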
Key Findings:
·AI Risk Overestimation: Compared to humans, AI tends to overestimate the health risk levels of behaviors. This overestimation primarily stems from exaggerating the severity of consequences rather than the probability of occurrence, suggesting an inherent tendency for AI to be more cautious and risk-averse in health assessments.
·The Role of Situational Threat: In low-threat scenarios, individuals preferred to adopt health advice from human groups over AI (a sign of algorithm aversion).
·Mitigation of Bias: This algorithm aversion disappeared in high-threat scenarios. In fact, in high-threat contexts, participants updated their beliefs more after receiving AI advice than after receiving human advice, providing new evidence for deep human-machine cognitive integration in high-risk scenarios.
Significance and Authorship
This study provides empirical evidence for understanding the persuasive effects of AI health advice and challenges the assumption that AI serves as an "objective proxy for collective human wisdom". It also highlights the critical role of situational threat levels in mitigating algorithm aversion and promoting the effective integration of AI suggestions.
The first author of the paper is Mengying Liu, a doctoral student in the Department of Psychological and Cognitive Science, and the corresponding author is Professor Xiaoang Wan.
Paper Link: https://doi.org/10.1111/aphw.70135
Faculty Profile
Xiaoang Wan

Professor and Doctoral Supervisor, Department of Psychological and Cognitive Science.
Research Interests: Cross-modal research, Human-AI Interaction (HAI), sensory marketing, and AI marketing.