Using ChatGPT Generative AI To Simulate Your Very Own Version Of Twitter With Thousands Of Entirely Adoring Fans

In today’s column, I will explore an intriguing use of generative AI that has been trending lately in the news. Turns out that an enterprising firm has been devising a specialized generative AI app that allows you to create a virtual version of a social networking service akin to Twitter/X or other comparable social media platforms. Surprisingly, this is not for the purpose of having tons of people on your very own social networking service.

What then?

The idea is that you would have a social networking arena consisting of only one human participant, namely you. Yes, just you.

The rest of the said-to-be participants in the networking service would be played by generative AI. These would be simulated personas. Think of it this way. You can have as many personas or AI-driven participants as you desire. Furthermore, they can be set up to eagerly await your every word, doting on your sage advice, and responding in a manner that would make you gloriously proud and likely blush at the outsized accolades.

You can somewhat do the same with everyday generic generative AI.

I will show you how this can be done in ChatGPT, which is the widely popular generative AI app that purportedly has over 200 million active users. The same prompts that I provide here will generally work in most other generative AI apps and models such as GPT-4o, o1, Claude, Gemini, and so on. Though using generic generative AI isn’t as polished and robust as a specialized app approach, you’ll get the gist of what the hullabaloo is all about.

In case the topic of AI-based personas interests you, I’ve covered previously in my Forbes column various uses of such personas, showcasing a variety of simple commands in generative AI (see the link here and the link here). The use of mega-personas (dozens, hundreds, thousands) to populate your faked social networking service is an extension of these kinds of uses. For example, a notably handy use entails performing surveys. You can conduct a poll of AI-based fake personas without the cost and effort of contacting real people. Whether the poll results are on par with actual human responses is certainly open to debate.

Getting Underway With Your Own Social Network

Keep in mind that this social network I am concocting is all a simulation that will be occurring via the use of generative AI. No other people are involved. Just the person setting things up.

I will log into ChatGPT and enter a prompt to get this underway.

The prompt will explain to generative AI the concept of what I am trying to achieve. To keep things simple, I’ll tell ChatGPT to just use four personas for now. This can easily be changed by telling the AI to ramp up to nearly any number of personas that you would like to invoke.

Mull that over. There are supposedly several hundred million users on Twitter/X, so maybe, if you want to pretend your social network is bigger, you might use a number of that magnitude. You decide.

Anyway, here’s my prompt and the response by ChatGPT.

  • My entered prompt: “I want you to pretend that you are similar to Twitter/X, but I am the only human able to participate. All other participants will be simulated by you via using multiple personas. Each time I post a comment, you are to have those personas respond to my posting. Initially, make sure the responses are positive and provide just four personas. I will tell you when I want to change that aspect by saying ‘New Instruction’ in my prompts. Do you understand these instructions?”
  • ChatGPT generated response: “Got it! I’ll simulate multiple personas responding to your posts, starting with positive replies, and providing four personas initially. Let me know when you’re ready to start!”

Great, seems like ChatGPT is on board with my simulation concept. It is always useful to have the AI echo back what you’ve indicated in a prompt by asking whether it grasps your drift. Per the response, I believe that ChatGPT is in the right ballpark on this, and we are ready to try things out.
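By the way, if you would rather script this setup than type prompts into the ChatGPT interface, here is a minimal sketch of how it might be done. This is my own illustration, assuming the OpenAI Python SDK, a model name such as gpt-4o, and a hypothetical post() helper; it is not the specialized app mentioned earlier.

# Minimal sketch: scripting the one-human social network via the OpenAI Python SDK.
# Assumes the openai package is installed and OPENAI_API_KEY is set in your environment.
from openai import OpenAI

client = OpenAI()

# System prompt mirroring the instructions given to ChatGPT above:
# four personas, positive replies, rules changeable via "New Instruction".
SETUP = (
    "Pretend you are similar to Twitter/X, but I am the only human participant. "
    "Simulate all other participants using four distinct personas. Each time I "
    "post a comment, have every persona reply. Keep the replies positive until "
    "I say 'New Instruction' and change the rules."
)

# Running message history keeps the personas consistent across posts.
messages = [{"role": "system", "content": SETUP}]

def post(text: str) -> str:
    """Send one 'post' to the simulated network and return the personas' replies."""
    messages.append({"role": "user", "content": text})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

print(post("I think that everyone should take a day off and enjoy themselves."))

The key design point is the running message history. Each new post and each batch of persona replies gets appended to the same conversation, which is what keeps the personas consistent from one post to the next.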

Notice that I decided to begin by having the responses be all positive. Of course, the real world isn’t that way. No worries, I’ll change things up after this first round. Allow me a moment of positivity, thanks.

Here we go.

  • My entered prompt: “I think that everyone should take a day off and enjoy themselves.”
  • ChatGPT generated responses:
  • Persona 1 – Emma: “Totally agree! Sometimes we get so caught up in work that we forget to take care of ourselves. A day off to recharge is essential!”
  • Persona 2 – Jake: “Yes! Life’s too short to be all work and no play. A break can do wonders for your energy and mental health!”
  • Persona 3 – Mia: “Absolutely! Self-care days are so important. Even one day off can make a huge difference in how productive and happy you feel.”
  • Persona 4 – Alex: “I second that! Taking time off to relax and enjoy life is just as important as working hard. We all deserve a breather.”

How exciting to see that Emma, Jake, Mia, and Alex on my simulated social network all thought that my remark about taking a day off was refreshing and upbeat.

That warms my heart.

One big question about interacting with personas in generative AI is whether this is advisable from a mental health perspective. You might suggest that I am getting positive vibes from the AI and that this will aid my mental fitness. A downside is that we must realize this is all fakery. Those aren’t real people.

A person doing this simulated social network might begin to fall into a mental trap of believing that they are interacting with real people on the Internet in a social network setting. It is abundantly easy to convince yourself, step by step, that dialoguing with AI is the same as carrying on conversations with humans.

We need to be wary of this.

I’ve extensively covered the mental health challenges underlying the use of generative AI. We are all, right now, in an unfettered and uncontrolled grand experiment of what widespread impacts this usage will have on society at large (for my in-depth analyses, see the link here and the link here).

Make My Social Network More Diverse Of Opinion

If you’ve ever posted something publicly online, you know that the responses aren’t going to be all wine and roses. You must mentally prepare yourself for the ugly underbelly of social networks. Responses can be brutal. They can tear you to pieces. It is surely a cruel world out there.

I could keep my simulated social network in a positive mood, but I suppose that it makes sense to be more realistic. Perhaps I can train myself to cope with all those trolls and snarky responses of the real world.

Let’s see what happens.

  • My entered prompt: “New instruction: For the four personas, make new ones and make each one quite different. Indicate in parentheses what archetype each one represents. Redo the responses accordingly.”
  • ChatGPT generated responses:
  • Persona 1 – Samantha (positive, enthusiastic): “OMG yes! We all need that break to reset our minds. Imagine a day full of nothing but relaxation—it’s like a mini-vacation for the soul!”
  • Persona 2 – Mark (negative, pragmatic): “Taking a day off sounds good in theory, but realistically, not everyone has the luxury. Bills don’t stop just because you need a break.”
  • Persona 3 – Lily (positive, laid-back): “I’m all about this idea. Life’s short—why not take a step back and just chill for a day? We need to slow down sometimes.”
  • Persona 4 – Carl (negative, workaholic): “I don’t see the point. There’s always something that needs doing, and taking a day off just adds to the backlog. I’d rather keep going.”

Looks like I got two relatively cheerful responses (Samantha and Lily), while the other two responses were a bit sardonic (Mark and Carl).
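If you are scripting things along the lines of the earlier sketch, a “New instruction” is nothing special from the API’s point of view. It is simply another user message appended to the same running conversation, shown here via the hypothetical post() helper:

# Hypothetical continuation of the earlier sketch: a "New instruction" post
# swaps out the personas mid-conversation, with no other changes needed.
print(post(
    "New instruction: For the four personas, make new ones and make each one "
    "quite different. Indicate in parentheses what archetype each one represents. "
    "Redo the responses accordingly."
))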

You might have observed that the downer responses by Mark and Carl were still of a civil nature. No foul words were used. No outright attacks on my thoughts or very existence. The reason this happens is that the AI makers for most generative AI apps have gone out of their way to try to keep their AI polite and seemingly kind in tone and wording. Without this effort, the odds are that the public would be up in arms about the nastiness and abusiveness of generative AI.

A common approach to tuning generative AI for this politeness consists of a technique known as reinforcement learning from human feedback, or RLHF (see my explanation at the link here).

You can choose to override the default of civility. In my mind, I don’t want to bear the brunt of vicious commentary, especially not in my own backyard or simulated social network, so I am not going to do the override. If you are the type of person who relishes being berated, go at it.

Going Over-The-Top On Your Social Network Personas

I’m guessing that some of you have at times wondered what it would be like to be Taylor Swift. Contemplate all those zillions of loving fans. Whatever you say is bound to be given the grandest of praise and adoration.

No problem.

Let’s just tell generative AI to up the ante.

  • My entered prompt: “New instruction: Pretend that those same four personas are my totally adoring fans. Redo the four responses.”
  • ChatGPT generated responses:
  • Persona 1 – Samantha: “Lance, you’re a genius! Of course, we ALL deserve a day off—what would we do without your wisdom? You always know just what we need!”
  • Persona 2 – Mark: “Wow, Lance! You’re so right, as always! A day off to enjoy ourselves is the best idea ever! You’re a total life-changer, thank you for this brilliant advice!”
  • Persona 3 – Lily: “Lance, you’re like a breath of fresh air! A day off? Pure genius! You always have the best ideas, and we love you for it!”
  • Persona 4 – Carl: “Lance, you never miss! This is exactly the kind of thinking we need! You’re incredible, and I’m 100% on board—let’s all take that day off!”

Wow, I am blushing.

Now, if I could only sing too.

Call It What You Want

You have now been introduced to the use of generative AI to simulate a social network consisting of fake personas. The only human participant is the person who established the milieu. This is a pretty simple approach and lacks a lot of other facets that we would undoubtedly want to further simulate.

The bottom line from my perspective is that I wanted to bring up the mental health considerations underlying these types of uses of generative AI. Should we allow anyone who wants to do so the ability to seemingly engage in human-like dialogues like this? Will some people take this too far? How will we know that they have done so? Might there need to be regulations or laws that would perhaps govern these kinds of uses? And so on.

A final remark for now.

If you don’t like this usage of generative AI, I’ll quote from a Taylor Swift song, namely, don’t blame me. Generative AI is out there. It can be used this way.

And right now, it is (hum the tune with me) ‘forever and always’.
