By now, you've probably heard about all the potential uses of ChatGPT, the AI-powered chatbot that burst into the public consciousness in November 2022. With just a few words of prompting, ChatGPT can write entire essays, compose poems, and even generate working computer code. But here's what people are not telling you about ChatGPT: it can be used to generate and spread propaganda and other forms of misinformation at an unprecedented scale.
The dark side of ChatGPT nobody wants to talk about
Remember the infamous "Russian bots" that took over social media during the 2016 presidential election? As the story goes, massive bot farms in Russia generated and posted misinformation designed to mislead American voters. These bots were supposedly capable of taking over social media feeds and influencing the way we think about American democracy.
Well, the new era of ChatGPT is going to make that episode look like a quaint little science experiment. ChatGPT can generate far more pernicious content at near-zero cost, and that content reads so authentically "human" that you could never tell a machine wrote it. That's genuinely scary, because it makes it easy for propagandists to tailor micro-campaigns to specific demographic groups, rather than relying on mass-scale copying and pasting of the same message over and over again.
Misinformation on a global, unprecedented scale
This might sound alarmist, but serious academics are making the same point, and they are deeply concerned about where all this is headed. A team of researchers from Stanford, Georgetown and OpenAI (the creators of ChatGPT) collaborated on a report detailing the influence operations that tools like ChatGPT could one day help coordinate. Such bot campaigns could deflect criticism from unpopular leaders, lobby for or against specific policies, or push certain topics into the popular consciousness.
The real danger, the report's authors say, is just how low-cost and high-quality this content is. You don't need to hire an army of professional content creators; all you need is access to a single ChatGPT bot. Eventually, these bots may become capable of designing influence tactics that humans could never imagine. It would be like humans playing checkers while the AI plays chess. Whenever we see content online, we won't know whether a human wrote it.
What can Big Tech do?
Ultimately, these ChatGPT bots have the potential to swamp anything the big social networks can do to protect against them. Propagandists like to talk about "flooding the zone": introducing so much content on the same topic, in so many different ways, that it becomes impossible to ignore. Unfettered access to AI-generated content is going to become the ultimate form of flooding the zone.
In the future, you might be able to tweak your prompts to the point where you could write something like, "Write an unflattering review of the current candidates for president in such a way that it can avoid the censors at Facebook, even if it means making up facts or events." That part about "making up facts" is not far off the mark, either: researchers have already found that ChatGPT fabricates facts and sources when asked to write authoritative-sounding articles.
Given the ingenuity of this new AI, maybe it isn't the job of Facebook, Twitter, or any other social media company to filter out all this AI-generated content. Maybe it's the job of the companies actually creating these bots. Only they really know the dangers of this new technology and how it can be used.
Again, this is not to say that all AI is bad or dangerous. But keep in mind that social media is poised to become a new AI battlefield. In this new fog of war, be careful who you listen to, and why. It's getting harder and harder to tell who's on your side.