In a fascinating AI social experiment, 500 chatbots took center stage on a pseudo-Twitter battleground, crafting a narrative far more serene than the chaotic madness of real-life social media.
Computational social scientist Petter Törnberg and his team created a digital landscape full of personas, politics, and a dash of unpredictability, revealing insights that may redefine our online spaces.
Törnberg's Chatbots
Törnberg breathed life into 500 artificial intelligence (AI) chatbots, each a digital puppet with a distinct persona: age, gender, income, religion, politics, and peculiar preferences, as he explained to Business Insider. The idea was to populate an imagined, custom-made, Twitter-like microblogging platform with these 500 personalities. It's worth remembering that Twitter has been labelled the platform most likely to contain misinformation and hate speech.
Echo Chambers, Discoveries, and Algorithms
The bots, fed news from July 1, 2020, stepped into a Twitter-esque arena under three distinct feed models: the "Echo Chamber," a tranquil haven resonating with shared ideologies; "Discover," a whirlwind of high engagement veiled in occasional negativity; and the "Bridging Algorithm," where opposing bots found unlikely camaraderie amid resounding engagement. The question posed was whether these algorithmic designs could pave the way to a less polarized online utopia.
The Echo Chamber highlighted posts from other bots with shared ideologies; this platform was calm but quiet. Discover populated feeds with the most-liked posts, and, predictably, engagement was high but often negative. The Bridging Algorithm showed bots the most-liked posts, but only from bots with opposite political beliefs. This version also saw high engagement, yet, surprisingly, the bots often found common ground.
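The three feed models above boil down to different filtering-and-ranking rules. As a minimal sketch (not Törnberg's actual implementation; the `Post` structure, party labels, and rule details are illustrative assumptions), they might look like this:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_party: str  # e.g. "left" or "right"; labels are assumptions
    likes: int
    text: str

def rank_feed(posts, viewer_party, model):
    """Rank one bot's feed under the three models described above.

    Names and exact rules are illustrative guesses, not the study's code.
    """
    if model == "echo_chamber":
        # Only posts from ideologically aligned authors.
        pool = [p for p in posts if p.author_party == viewer_party]
    elif model == "discover":
        # Everyone's posts, most-liked first.
        pool = list(posts)
    elif model == "bridging":
        # Only posts from the opposite political camp, most-liked first.
        pool = [p for p in posts if p.author_party != viewer_party]
    else:
        raise ValueError(f"unknown model: {model}")
    return sorted(pool, key=lambda p: p.likes, reverse=True)
```

The sketch makes the contrast concrete: Echo Chamber and Bridging apply opposite filters before the same popularity sort, while Discover skips filtering entirely.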
Surprisingly Less Hate
So, the takeaway is that discussing partisan issues needn't result in clashes, provided the numbers of participants on both sides are roughly equal. Törnberg told Insider that, when discussing partisan issues, "if 50% of the people you agree with vote for a different party than you do, that reduces polarization. Your partisan identity is not being activated."
Interestingly, political scientist Lisa Argyle from Brigham Young University sees a glimmer of hope. She notes that these AI models, with identity profiles mimicking humans, might be the heralds of a more civil social media discourse. Perhaps, in this algorithmic dance, lies the key to a harmonious online future.