A team of researchers from the University of Zurich ran an undercover AI experiment in Reddit’s 3.8 million-member r/changemyview community without obtaining user consent.
Over several months, the group posted AI-generated comments to sway Reddit users, none of whom knew they were part of a study.
r/changemyview is a subreddit known for thoughtful, good-faith debate. The researchers used large language models (LLMs) to generate personalized, persuasive replies that posed as real users, testing how effectively AI could shift opinions.
Key Highlights
- AI-crafted comments, posted from accounts masquerading as genuine users, were used to influence opinions.
- The fabricated personas included trauma survivors and politically polarizing figures.
- The AI drew on personal details gleaned from users’ posting histories to deliver customized persuasive comments.
- All activities occurred without disclosure, breaching subreddit and wider Reddit policies.
This went beyond mere automation.
It was psychological manipulation that harnessed personal data to test how effectively AI could alter individual opinions.
Ben Lee, Reddit’s Chief Legal Officer, condemned the project as “morally and legally wrong,” and confirmed that Reddit is taking steps toward formal legal action.
Researchers’ Justification
The researchers maintain that their university’s ethics board approved the study, asserting that its aim was to show how AI could be misused to manipulate public opinion during elections or to spread hate speech.