Agnes, a bespectacled 20-year-old poet, likes tarot cards and late-night philosophy debates. She has a crush on Finn, a 22-year-old farmer who knits in his free time. Finn’s friend Jade, 22, also lives in the countryside, where she forages for mushrooms and bakes bread.
Jade, Finn and Agnes are influencers who frequently post selfies and life updates — but none of them actually exist.
They are the creations of 1337, a company that designs and operates artificial intelligence-generated online avatars, sometimes selecting what to post based on advice from real-life teenagers.
The firm is one upstart in a burgeoning industry that harnesses AI to draw eyeballs and drive revenue in the lucrative business of social media. By 2035, the so-called “digital human economy” is projected to become a $125 billion market, according to a report from research firm Gartner.
Lil Miquela, an AI-generated influencer made by tech firm Brud, boasts 2.6 million followers on Instagram.
The advent of AI-generated accounts introduces new opportunities for expression on social media, but it also raises questions about whether the avatars could be used to manipulate followers or spread misinformation, Claire Leibowicz, head of the AI and media integrity program at the nonprofit Partnership on AI, told ABC News.
“This brings up fundamental questions of what it means to have human relationships,” Leibowicz said.
1337, pronounced “leet,” refers to its AI-generated influencers as “entities.” Currently, the company says it operates 50 entities, each designed to appeal to a different audience. The company said it plans to add another 50 to its roster in April.
The company begins designing a new entity by identifying a community of people with a specific interest or trait who may take an interest in the influencer, Jenny Dearing, the founder and CEO of 1337, told ABC News. Then, she added, the firm fills out the details.
“We think about everything from how they might live their lives, where they reside, what does their room look like, what are their hobbies,” Dearing said.
The company says humans are involved at multiple points to moderate the creation process, including ongoing curation from people Dearing described as “creators.” Each creator is paired with an entity, helping to filter out flawed AI images and select posts in keeping with a given entity’s persona. Some of the creators are teenagers.
Sawyer Erch, a 16-year-old who lives in Oakland, California, said they spend 20 to 30 minutes each day working as a creator for 1337. Using an app, Erch assesses AI-generated images to be included in future posts.
First, they said they check for abnormalities. “A lot of time, there’ll be extra fingers or extra limbs,” Erch said.
Next, they make sure a prospective post conforms to the entity’s backstory and personality. “If it’s something completely different, then you obviously can’t pick that,” Erch added.
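The review process Erch describes amounts to a two-stage filter: reject images with visual abnormalities, then reject images that break character. The sketch below is purely illustrative; the field names, flags, and `review` function are hypothetical and do not represent 1337's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A hypothetical AI-generated image proposed for an entity's feed."""
    image_id: str
    has_visual_artifacts: bool  # e.g., extra fingers or limbs spotted on review
    matches_persona: bool       # consistent with the entity's backstory and hobbies

def review(candidates):
    """Keep only images that pass both checks Erch describes:
    1) no visual abnormalities, 2) fits the entity's persona."""
    return [c for c in candidates
            if not c.has_visual_artifacts and c.matches_persona]

batch = [
    Candidate("img-001", has_visual_artifacts=True,  matches_persona=True),
    Candidate("img-002", has_visual_artifacts=False, matches_persona=False),
    Candidate("img-003", has_visual_artifacts=False, matches_persona=True),
]
approved = review(batch)
print([c.image_id for c in approved])  # only "img-003" passes both checks
```

In practice both judgments are made by a human looking at the image, not by boolean flags; the point of the sketch is only the order of the checks.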
Synthetic influencers offer an opportunity for people of all ages to interact with AI in a low-stakes setting, Erch said.
“AI is going to become more and more prominent,” Erch said. “So I think it’s good for younger people and older people to play around with that.”
Some advocates, however, caution about the risks posed by AI-generated social media posts.
Synthetic fashion influencers seem benign, but similar technology could be used to deceive audiences on more consequential issues, said Leibowicz, of Partnership on AI.
“While it may seem fine for influencers to be saying that this new lip gloss is really meaningful, we don’t want that same technology to be used to cast doubt on much more,” Leibowicz said. “I would argue there are more high-stakes use cases than lip gloss.”
Some of the original 1337 profiles are identified as being generated by an AI tool, but the company said future profiles may not have the same disclaimer. Creators will determine what to disclose about an entity, the company said.
TikTok requires users to label AI-generated content, according to the company’s content guidelines. Meta — the parent company of Instagram and Facebook — said last month it would begin labeling images created by OpenAI, Midjourney and other AI products.
“I hope this is a big step forward in trying to make sure that people know what they’re looking at, and they know where what they’re looking at comes from,” Nick Clegg, Meta’s president of global affairs, said in an interview with ABC’s “Good Morning America.”
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Clegg said.
The labeling approach reflects an attempt at enforcement but may prove limited, Leibowicz said.
“There’s a visceral appeal to putting a little mark on TikTok or a mark on Instagram that says, ‘This is fake,'” Leibowicz added. “But whether or not people care, and whether that will be applied in a context that matters, is a big open question.”
No matter how it gets addressed, AI is here to stay, said M.B., Erch’s mother.
“Pandora’s box is open,” she said. “AI is not going back in the box.”