Researchers built a fake Twitter populated with bots — not to be confused with actual Twitter, now X, which is only mostly bots.
And it turns out there’s some promise that these kinds of simulacra can actually tell us something about humans, per a fascinating read from Business Insider.
How it worked
Lead scientist Petter Törnberg and his team built 500 chatbots using GPT-3.5, the model behind ChatGPT. Each had a persona specifying its age, gender, income level, religion, politics, preferences, etc.
The bots were fed news from July 1, 2020, and let loose inside a Twitter-like social media platform to discuss it.
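The article doesn't share the team's actual code or prompt wording, but the setup amounts to handing each bot a persona as a system prompt. Here's a rough Python sketch, with made-up fields and phrasing, of what that could look like:

```python
import random
from dataclasses import dataclass

@dataclass
class Persona:
    """One simulated user; the exact attributes used in the study are an assumption here."""
    age: int
    gender: str
    income: str
    religion: str
    party: str          # e.g. "Democrat" or "Republican"
    interests: list

    def system_prompt(self) -> str:
        # Illustrative wording only; the real study prompts aren't quoted in the article.
        return (
            f"You are a {self.age}-year-old {self.gender} on a Twitter-like site. "
            f"Income: {self.income}. Religion: {self.religion}. "
            f"You vote {self.party} and care about {', '.join(self.interests)}. "
            "Write short posts and replies in this persona."
        )

def random_persona() -> Persona:
    return Persona(
        age=random.randint(18, 80),
        gender=random.choice(["man", "woman", "nonbinary person"]),
        income=random.choice(["low", "middle", "high"]),
        religion=random.choice(["Christian", "Jewish", "Muslim", "none"]),
        party=random.choice(["Democrat", "Republican"]),
        interests=random.sample(["sports", "gardening", "politics", "gaming", "travel"], 2),
    )

# 500 bots, each with its own persona-as-system-prompt
personas = [random_persona() for _ in range(500)]
print(personas[0].system_prompt())
```

In the study, each bot would then read its feed and reply via the GPT-3.5 API; those calls are left out of the sketch.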
Why? To study how to build a better social network, given the idea that large language models (LLMs) — designed to act like people conversing — would allow researchers to efficiently study human behavior.
So what happened?
The study tested three different feed algorithms for how its Twitter functioned (rough code sketches follow the list):
- The “echo chamber” showed each bot only posts from bots that shared its ideology. This Twitter was pleasant, but quiet.
- “Discover” populated feeds with the most-liked posts overall. It saw high, but often negative, engagement.
- The “bridging algorithm” showed bots the posts that got the most likes from bots with opposite political beliefs. This Twitter also saw high engagement, but the divergent bots often found common ground.
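To be clear, these are not the researchers' actual ranking rules, just a plausible reading of the article's descriptions. In Python, the three feeds might boil down to something like:

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author_party: str                               # party of the bot that wrote it
    text: str
    liked_by: list = field(default_factory=list)    # parties of the bots that liked it

def echo_chamber_feed(viewer_party: str, posts: list, k: int = 10) -> list:
    """Only posts written by like-minded bots."""
    return [p for p in posts if p.author_party == viewer_party][:k]

def discover_feed(posts: list, k: int = 10) -> list:
    """Most-liked posts overall, regardless of who wrote or liked them."""
    return sorted(posts, key=lambda p: len(p.liked_by), reverse=True)[:k]

def bridging_feed(viewer_party: str, posts: list, k: int = 10) -> list:
    """Posts ranked by likes coming from the other side of the aisle."""
    def cross_party_likes(p: Post) -> int:
        return sum(1 for liker in p.liked_by if liker != viewer_party)
    return sorted(posts, key=cross_party_likes, reverse=True)[:k]

# Tiny demo
posts = [
    Post("Democrat", "Masks work.", liked_by=["Democrat", "Democrat", "Republican"]),
    Post("Republican", "Reopen the economy.", liked_by=["Republican", "Republican"]),
]
print([p.text for p in bridging_feed("Republican", posts)])
```

The only real difference between “Discover” and the bridging feed is whose likes get counted, which is the whole point of the experiment.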
Törnberg told Insider that, when discussing partisan issues, “if 50% of the people you agree with vote for a different party than you do, that reduces polarization. Your partisan identity is not being activated.”
Of course…
… these are just bots in a sandbox, and there’s still work to be done on training methods and ethics.
But Lisa Argyle, a political scientist at Brigham Young University, said LLMs given identity profiles like these often answer survey questions much the way the humans they were modeled after would, so maybe there’s hope for social media yet.