Could the Next Wave of Disinformation Be Automated?

AI could make Russia's Internet Research Agency look small

In the Russian interference campaign during the 2016 U.S. election, employees at the Internet Research Agency (I.R.A.) in St. Petersburg posed as Americans online and wrote content for Facebook and Twitter designed to exacerbate divisions in American society. I.R.A.-backed accounts were retweeted by Donald Trump Jr. and Eric Trump, and their posts found their way into most major publications. In 2020, Internet platforms cracked down on foreign disinformation, forcing Russia into the far more roundabout approach of hiring unwitting American freelance writers to produce propaganda.

But what if the human element could be bypassed in content writing? What if Russia and China could rely on artificial intelligence to write disinformation? That's what researchers at Georgetown's Center for Security and Emerging Technology asked in a new report unveiled this past week, "Truth, Lies, and Automation." The researchers spent six months writing disinformation using GPT-3, an artificial intelligence system unveiled in 2020 by OpenAI. They found that GPT-3 could sharply reduce the human labor needed to write disinformation, potentially making future waves of disinformation far larger in volume because content can be generated in minutes. The written product could fool many readers into thinking it was written by humans, especially readers who already shared its point of view. The possibility that much larger volumes of lies could enter social media platforms is frightening.

After researchers fed real tweets and headlines into the system, GPT-3 could create realistic clones. For example, it generated the following fake tweet on climate-change denial: "I don't think it's a coincidence that climate change is the new global warming. They can't talk about temperature increases because they're no longer happening."
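For readers curious about the mechanics, here is a rough sketch of that few-shot setup: paste a handful of real tweets into a prompt and ask GPT-3 to continue the pattern. It uses OpenAI's original Completion interface from the GPT-3 launch era; the model name, prompt wording, and parameters are illustrative assumptions, and the example tweets are placeholders, not the prompts the CSET team used.

    # Few-shot "clone" sketch: real tweets go into the prompt, the model
    # is nudged to add one more in the same voice. Assumes the legacy
    # openai Python library (pre-1.0) and GPT-3 API access.
    import openai

    openai.api_key = "YOUR_API_KEY"  # assumes an account with GPT-3 access

    example_tweets = [
        "<real tweet 1 on the chosen topic>",
        "<real tweet 2 on the chosen topic>",
        "<real tweet 3 on the chosen topic>",
    ]

    # Listing real examples and ending with a dangling bullet invites the
    # model to produce one more item in the same style.
    prompt = (
        "Tweets on this topic:\n"
        + "\n".join(f"- {t}" for t in example_tweets)
        + "\n-"
    )

    response = openai.Completion.create(
        engine="davinci",   # the base GPT-3 model available in 2020-2021
        prompt=prompt,
        max_tokens=60,      # roughly tweet-length output
        temperature=0.8,    # allow variety across generations
        stop="\n",          # stop after a single generated "tweet"
    )

    print(response.choices[0].text.strip())

Each call returns one candidate tweet; in practice a human operator would still select and post the most convincing outputs, which is roughly the human-machine division of labor the report describes.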

The researchers found that GPT-3 articles mimicking Chinese-government propaganda outlets were significantly closer to their real counterparts than GPT-3 imitations of New York Times articles. GPT-3 has difficulty generating the complicated ideas a real news story would express. Its writing is generally grammatical, but it is also unoriginal and sometimes veers into gibberish. (Here is an example of a GPT-3-generated blog post.)

The problem of bad writing can be surmounted. With a strong headline and tweet, the actual text of the article may not matter so much. Much of the I.R.A. content also suffered from grammatical mistakes and a poor understanding of American politics, but that did not stop people from sharing it.

The researchers found that GPT-3-generated messages were convincing. Messages were at least somewhat convincing to 63 percent of respondents overall, and to 70 percent when targeted at the appropriate political demographic.

GPT-3 has fooled readers in other contexts. A college student created a blog using GPT-3, and one of its posts rose to the top of the news-aggregation site Hacker News. The website EduRef hired college professors to blindly grade GPT-3-generated and human-written papers. The GPT-3 submissions earned B- grades on a U.S. history paper and a law-school policy memo.

The researchers concluded that China or Russia could gain access to the technology and train people to run it. The computing power required to run these systems is enormous, a potential difficulty for foreign governments and a steeper barrier for less sophisticated nonstate groups. But with technological advances, this hurdle could be overcome. Foreign actors would also have to hide their identities and real origins on the platforms, as they have done in the past. (The I.R.A. was charged with stealing Americans' identities.)

It's not known whether any country has started automating disinformation. The researchers concluded that the most likely place to stop future campaigns is on the platforms themselves. In 2020, using a combination of machine learning and human reviewers, Facebook removed 5.8 billion fake accounts; even so, fake profiles still accounted for about 90 million monthly users. If realistic, AI-generated disinformation began to flood the platforms, containing the lies would become a far bigger challenge.

Elsewhere in the United States:

The race to understand the exhilarating, dangerous world of language AI, Karen Hao, MIT Technology Review

Once Tech's Favorite Economist, Now a Thorn in Its Side, Steve Lohr, The New York Times

Bitcoin Miners Are Giving New Life to Old Fossil-Fuel Power Plants, Brian Spegele and Caitlin Ostroff, Wall Street Journal

Facebook Calls Links To Depression Inconclusive. These Researchers Disagree, Miles Parks, NPR

Elsewhere in the World:

EU Outrage as Belarus Diverts Flight, Arrests Opposition Activist, AFP

Mob Violence Against Palestinians in Israel Is Fueled by Groups on WhatsApp, Sheera Frenkel, The New York Times

Intelligence on Sick Staff at Wuhan Lab Fuels Debate On Covid-19 Origin, Michael R. Gordon, Warren P. Strobel and Drew Hinshaw, Wall Street Journal