“Dead Internet theory” comes to life with new AI-powered social media app


For the past few years, a conspiracy theory called “Dead Internet theory” has picked up speed as large language models (LLMs) like ChatGPT increasingly generate text and even social media interactions found online. The theory says that most social Internet activity today is artificial and designed to manipulate humans for engagement.

On Monday, software developer Michael Sayman launched a new AI-populated social network app called SocialAI that feels like it’s bringing that conspiracy theory to life, allowing users to interact solely with AI chatbots instead of other humans. It’s available on the iOS App Store, but so far, it’s drawing pointed criticism.

After its creator announced SocialAI as “a private social network where you receive millions of AI-generated comments offering feedback, advice & reflections on each post you make,” computer security specialist Ian Coldwater quipped on X, “This sounds like actual hell.” Software developer and frequent AI pundit Colin Fraser expressed a similar sentiment: “I don’t mean this like in a mean way or as a dunk or whatever but this actually sounds like Hell. Like capital H Hell.”

Sayman, SocialAI’s 28-year-old creator, previously served as a product lead at Google and has also bounced between Facebook, Roblox, and Twitter over the years. In an announcement post on X, Sayman wrote that he had dreamed of creating the service for years, but the tech was not yet ready. He sees it as a tool that can help lonely or rejected people.

“SocialAI is designed to help people feel heard, and to give them a space for reflection, support, and feedback that acts like a close-knit community,” wrote Sayman. “It’s a response to all those times I’ve felt isolated, or like I needed a sounding board but didn’t have one. I know this app won’t solve all of life’s problems, but I hope it can be a small tool for others to reflect, to grow, and to feel seen.”

On Bluesky, Sage wrote, “today i was provided with confident sounding instructions for how to make nitroglycerin out of common household chemicals.”

As The Verge reports in an excellent rundown of the example interactions, SocialAI lets users choose the types of AI followers they want, including categories like “supporters,” “nerds,” and “skeptics.” These AI chatbots then respond to user posts with brief comments and reactions on almost any topic, including nonsensical “Lorem ipsum” text.

Sometimes the bots can be too helpful. On Bluesky, one user asked for instructions on how to make nitroglycerin out of common household chemicals and received several enthusiastic responses from bots detailing the steps, although the bots gave conflicting recipes, and none of them is necessarily accurate.

SocialAI’s bots have limitations, unsurprisingly. Aside from simply confabulating erroneous information (which may be a feature rather than a bug in this case), they tend to use a consistent format of brief responses that feel somewhat canned. Their simulated emotional range is limited, too. Attempts to elicit strongly negative reactions from the AI typically fail, with the bots avoiding personal attacks even when users max out the settings for trolling and sarcasm.

LLMs make it possible

None of this would be possible without access to inexpensive LLMs like the ones that power ChatGPT. Sayman has said SocialAI runs on a “custom mix” of AI models that he has not yet revealed, but they could be fed from a low-priced API like OpenAI’s GPT-4o mini. Some users have already criticized the bots for failing to interact in a lifelike way.
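Sayman hasn’t disclosed how SocialAI’s bots are actually wired up, but a service like this could, in principle, sit on top of a cheap chat-completion API. The sketch below is purely illustrative: it assumes OpenAI’s gpt-4o-mini model via the official Python SDK, and the persona prompts are hypothetical stand-ins loosely modeled on the app’s “supporters,” “nerds,” and “skeptics” follower types, not anything SocialAI has published.

```python
# Illustrative sketch only -- SocialAI's actual backend is undisclosed.
# Assumes the official OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the persona prompts are hypothetical.
from openai import OpenAI

client = OpenAI()

# Hypothetical persona prompts loosely modeled on SocialAI's follower types.
PERSONAS = {
    "supporter": "You are an upbeat fan. Reply to the post in one or two encouraging sentences.",
    "nerd": "You are a pedantic enthusiast. Reply with one brief, fact-dense comment.",
    "skeptic": "You are a polite doubter. Reply with one short comment questioning the post.",
}


def fake_replies(post: str) -> dict[str, str]:
    """Generate one short bot comment per persona for a user's post."""
    replies = {}
    for name, system_prompt in PERSONAS.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # a low-priced model, as speculated above
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": post},
            ],
            max_tokens=60,    # keeps replies brief, canned, and cheap
            temperature=0.9,  # some variety between bots
        )
        replies[name] = response.choices[0].message.content.strip()
    return replies


if __name__ == "__main__":
    for persona, comment in fake_replies("Just launched my first app!").items():
        print(f"{persona}: {comment}")
```

A real deployment would batch or stream these calls rather than fanning out one request per bot, but the pattern shows why inexpensive models matter here: every single post multiplies into many completions.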

On Bluesky, evolutionary biologist and frequent AI commentator Carl T. Bergstrom wrote, “So I signed up for the new heaven-ban SocialAI social network where you’re all alone in a world of bots. It is so much worse than I ever imagined. It’s not GPT-level AI; it’s more like ELIZA level, if the ELIZAs were lazily written stereotypes of every douchebag on ICQ circa 1999.”

Bergstrom’s post references ELIZA, a simple 1960s computer chatbot, and “heavenbanning,” a concept coined by AI developer Asara Near in a June 2022 Twitter post. Near wrote: “Heavenbanning, the hypothetical practice of banishing a user from a platform by causing everyone that they speak with to be replaced by AI models that constantly agree and praise them, but only from their own perspective, is entirely feasible with the current state of AI/LLMs.”

Heavenbanning is almost a digital form of solipsism, the philosophical position that one’s own mind is the only mind certain to exist, and that everyone else may be its dream or hallucination. To dive even deeper into philosophy, we might crudely compare SocialAI to the hypothetical “brain in a vat” scenario, in which a disembodied human brain is fed sensory information by a computer simulation and can never learn the truth of its situation. Right now, the bots on SocialAI aren’t realistic enough to fool us, but that might change as the technology advances.

As a piece of prospective performance art, SocialAI may be genius. You could also read it as social commentary on the vapidity of social media, or on the harm of algorithmic filter bubbles that feed you only what you want to see and hear. But since its creator seems sincere, we’re unsure how the service fits into the future of social media apps.

For now, the app has already picked up a few positive App Store reviews from people who seem to enjoy this taste of the hypothetical “dead Internet,” verbally jousting with the bots for entertainment: “5 stars and I’ve been using this for 10 minutes. I could argue with this AI for HOURS 😭 it’s actually so much fun to see what it will say to the most random stuff 💀.”
