Social media experts discuss moving beyond ‘discourse dumpster fires’

Social media “is great when it works, but sometimes it can be life-changing in the worst possible way,” Applied Social Media Lab Senior Director Jonathan Bellack told a group at the Harvard Law School campus on Sept. 12. His keynote opened the Berkman Klein Center’s conference, “Beyond Discourse Dumpster Fires: Tools and Strategies for Better Online Civil Space,” which presented strategies for strengthening the positive side of social media.

“Discourse dumpster fires are quick to spark and hard to extinguish,” Bellack said. “[This event will] hopefully persuade you that our lab is a worthwhile effort to create viable alternatives.”

The public interest, he said, “is best served when social media fosters healthy and thriving human interactions.” And social media, he said, “has a responsibility to treat people with a baseline level of dignity, and to make it as easy as possible for people to be their best selves online.”

Despite a commitment to these goals, which he witnessed during his own time as a director of product management at Google, online platforms are largely not succeeding at achieving them. This, he said, is due to a “promote and police” approach: Facebook, X/Twitter and other platforms tend to promote what they consider good content. “Algorithmic promotion in a profit-driven environment has an unavoidable bias toward the sensational and the inflammatory because that’s what drives usage.”

Platforms balance this by policing “bad” content, an approach that often backfires. “When AI tells you that your last comment broke the rules — or worse, that the harassment you reported was actually OK — the message conveyed is that the platform doesn’t trust you … It’s like they’re reviewing parts on an assembly line for defects.” Humans in turn will seek ways around the restrictions, while companies will try to close the loophole. “This leads to a surveillance nightmare. Everything you say or do online today is run through multiple systems looking for violations.” Such a vicious circle, he said, “is a recipe for bringing out the worst in human nature.”

The solution, he said, is a greater variety of social media options — something he said will occur when the big platforms bow to pressure and make it easier to migrate friends and content from one platform to another. “When I try to explain to my son that once upon a time you couldn’t take your phone number with you from Verizon to AT&T, he doesn’t believe me. My hope is that 20 years from now, having your friends and interests locked into one company’s server forever will seem like just a bad fairy tale. I hope that after today you will be convinced to demand more options and flexibility in your own online life.”

Among the day’s panels were demos of new media applications being developed at Harvard. Two members of Harvard’s Applied Social Media Lab, HLS Professor Lawrence Lessig and product manager Samantha Shireman, demonstrated America Talks, a deliberation platform designed to facilitate constructive dialogue and decision-making. Berkman Klein Center cofounder Professor Charles Nesson ’63 and product manager Lara Schull introduced Nymspace, a discussion forum built on the idea that pseudonymity can foster more open and constructive discussion within closed group environments. As Nesson noted, it effectively brings the Chatham House Rule (under which remarks may be shared but not attributed to the speaker) into online discussion.

Protecting civic discourse

The day’s first panel addressed how to protect civic discourse in an age of fracture, offering both new strategies for online interaction and an overview of how we got here.

danah boyd, partner researcher at Microsoft Research and a visiting professor at Georgetown University, pointed out the limitations of online interaction. Social media was originally the province of “self-identified geeks, freaks and queers, and I identify as all three,” she said. Younger people were the next to come along, largely due to a need for personal connection. Yet her research showed that young people feel a particular need for a certain kind of relationship, one with “noncustodial adults” — mentors, coaches, pastors, aunts/uncles and others who provide support without having the authority of a parent.

Such relationships are unlikely to be found online, and they disappeared entirely from many young people’s lives during COVID, when many interacted only with parents at home and peers online. boyd recalled her work as a mental health counselor doing suicide intervention during the pandemic. “One of the first things we asked was, ‘Who is the noncustodial adult that you can reach out to?’ And we heard, through COVID and to this day, ‘No one.’ That’s terrifying. So, it’s really important to think about the roles of these technologies … and where we need to repair in ways that have nothing to do with the technology.”

Moderator and Berkman Klein Center Faculty Director Jonathan Zittrain ’95 asked boyd if she was identifying a social problem rather than an online one. She agreed that the two are connected, but the offline social need may be more glaring. “School continues to be the primary site of bullying, no matter how much we think it’s happening online. Are there interventions that should happen around technology? Yes. But frankly, I would rather start upstream, not downstream.” While she said that AI chatbots have their uses, including providing teenagers with a sounding board, they can’t do crisis intervention. “At the end of the day, when somebody is in crisis, what they’re grateful for is that a human has spent time with them. That will never be replaced.”

Citing the annual panel event “Why I Changed My Mind,” Zittrain noted that people are most likely to change long-held beliefs because of real-life events, such as parenthood or loss, rather than something they’ve read in a newspaper or seen online. He then introduced Cornell University psychology professor Gordon Pennycook, who recently conducted an experiment to see if an AI chatbot could exert the same influence.

The experiment is detailed in Pennycook’s cover story in the latest issue of Science magazine, “Artificial Arguments.” It targeted believers in conspiracy theories, who were recruited through survey software and invited to a paid study. “We didn’t want to use the word conspiracy, but we said, ‘Just tell us something you believe in and give us the reasons why you do.’” (One example he named was a belief that 9/11 was an inside job.) The AI bot would then present contrary evidence, and after three back-and-forth exchanges, the subjects would be asked if their beliefs had been affected. (The bot has been released online as debunkbot.com.)

“After eight minutes of evidence-based conversation, we saw a massive decrease in people’s certainty about the conspiracy,” Pennycook said. (If they believed a conspiracy theory that was actually true, the bot would engage in the same way, but the contrary evidence proved less persuasive.) “The effect is larger for beliefs that are implausible because the AI has better counter-evidence.” Overall, he said, 20% of test subjects came away with different beliefs. “[Nearly] a quarter of the believers didn’t believe the conspiracy after the conversation, and this was evident even for people who believed the conspiracy at a 100 percent level or said it was important for their worldviews. So it actually did change people’s minds.”

The main takeaway, he said, is that evidence matters. “If you give people good evidence that is contrary to what they believe, will they listen to that, or are they motivated by their identities? Do they actually not care about the truth? But that’s not what we found. We saw effects in people who you would think were too far down the rabbit hole.”

During the Q&A session after the panel, Pennycook was asked if the same kind of bot might be used in more sinister ways — for instance, if Russia wanted to tamper with a U.S. election or if anyone wanted to spread misinformation. This he admitted was a possibility. “If you feed it with misinformation, people will be convinced by it. The key thing to take away is that we are responsive to the environment. Evidence is the thing that we experience — sometimes that’s good, sometimes that’s bad. You can use [this application] to get them out of the rabbit hole, or you can use it to get them in.”

Bringing together the themes of AI and human intervention, Deb Roy, professor of media arts and sciences at MIT, reported on a new social media app that is being developed there. The idea, he said, is to look beyond the Twitter and Facebook model where the point is “to build an audience and to basically have the modern-day version of broadcast reach. It’s not clear that’s the primary kind of social network we were [originally] seeking, when you talk about connection and community — giving everybody a bullhorn and [only] a few learn to use it. That led us to the concept of a different kind of social network, where we take the concept of dialogue rather than media. That’s a concept that we have been envisioning and experimenting with. How this is going to take root, if at all, makes me excited and stressed.”

The goal of this new, as-yet-unnamed app is to introduce the healthy aspects of real-life conversation and small-group dialogue into social media. This new network was recently introduced at MIT’s inauguration week, first to students and then to staff and faculty. “It can happen over Zoom or with the app. The key is small groups with a facilitator and a structured conversation guide … It also matters who’s in the group, so we think about circles of trust. And the role of the facilitator is not to participate, but to actually create that space for the others. So, these are not new ideas, not new technologies, but ancient social technologies. Part two is to create a network. If it is an IRL [in real life] conversation, there’s a consented recording and the ability to lift out highlights and allow them to be heard by other small groups.” AI would then come into the picture, to make a database of relevant clips that future conversations could draw from. “It would contextualize the themes and make them available to the people who were involved and to the larger community.”

As Zittrain pointed out, deliberation by AI will never replace the human element, potential conflict and all. “I always equate it to the opening of the Super Bowl, if everyone has their ticket and the NFL commissioner comes out and says, ‘Great news, everybody — the teams have resolved their differences’.”


