AI-Generated Text Is the Scariest Deepfake of All

Synthetic video and audio seemed pretty bad. Synthetic writing—ubiquitous and undetectable—will be far worse.

When pundits and researchers tried to guess what sort of manipulation campaigns might threaten the 2018 and 2020 elections, misleading AI-generated videos often topped the list. Though the tech was still emerging, its potential for abuse was so alarming that tech companies and academic labs prioritized working on, and funding, methods of detection. Social platforms developed special policies for posts containing “synthetic and manipulated media,” in hopes of striking the right balance between preserving free expression and deterring viral lies. But now, with about three months to go until November 3, that wave of deepfaked moving images seems never to have broken. Instead, another form of AI-generated media is making headlines, one that is harder to detect and yet much more likely to become a pervasive force on the internet: deepfake text.

Last month brought the introduction of GPT-3, the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way we react to the content that surrounds us?

This wouldn't be the first such media inflection point where our sense of what's real shifted all at once. When Photoshop, After Effects, and other image-editing and CGI tools began to emerge three decades ago, the transformative potential of these tools for artistic endeavors—as well as their impact on our perception of the world—was immediately recognized. “Adobe Photoshop is easily the most life-changing program in publishing history,” declared a Macworld article from 2000, announcing the launch of Photoshop 6.0. “Today, fine artists add finishing touches by Photoshopping their artwork, and pornographers would have nothing to offer except reality if they didn't Photoshop every one of their graphics.”

We came to accept that technology for what it was and developed a healthy skepticism. Very few people today believe that an airbrushed magazine cover shows the model as they really are. (In fact, it’s often un-Photoshopped content that attracts public attention.) And yet, we don’t fully disbelieve such photos, either: While there are occasional heated debates about the impact of normalizing airbrushing—or, more relevant today, filtering—we still trust that photos show a real person captured at a specific moment in time. We understand that each picture is rooted in reality.

Generated media, such as deepfaked video or GPT-3 output, is different. If used maliciously, there is no unaltered original, no raw material that could be produced as a basis for comparison or evidence for a fact-check. In the early 2000s, it was easy to dissect before-and-after photos of celebrities and discuss whether the latter created unrealistic ideals of perfection. In 2020, we confront increasingly plausible celebrity face-swaps in porn, and clips in which world leaders say things they’ve never said before. We will have to adjust, and adapt, to a new level of unreality. Even social media platforms recognize this distinction; their deepfake moderation policies distinguish between media content that is synthetic and that which is merely “modified.”

To moderate deepfaked content, though, you have to know it’s there. Out of all the forms that now exist, video may turn out to be the easiest to detect. Videos created by AI often have digital tells where the output falls into the uncanny valley: “soft biometrics” such as a person’s facial movements are off; an earring or some teeth are poorly rendered; or a person’s heartbeat, detectable through subtle shifts in coloring, is not present. Many of these giveaways can be overcome with software tweaks. In 2018’s deepfake videos, for instance, the subjects’ blinking was often wrong; but shortly after this discovery was published, the issue was fixed. Generated audio can be more subtle—no visuals, so fewer opportunities for mistakes—but promising research efforts are underway to suss those out as well. The war between fakers and authenticators will continue in perpetuity.
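As a toy illustration of how one of these tells might be checked, consider the sketch below. It is a hypothetical example, not any production detector: the per-frame eye-openness signal, the thresholds, and the blink-rate cutoff are all assumptions made for the sake of the demonstration. It simply counts blinks in a sequence of eye-openness values and flags a clip whose subject blinks implausibly rarely.

```python
# Hypothetical sketch: flag clips whose subjects blink implausibly rarely.
# In practice, the per-frame eye-openness values would come from a
# face-landmark detector; here they are simply a list of floats.

def count_blinks(eye_openness, closed_threshold=0.2, min_closed_frames=2):
    """Count blinks as runs of at least `min_closed_frames` consecutive
    frames in which the eye-openness value drops below `closed_threshold`."""
    blinks = 0
    closed_run = 0
    for value in eye_openness:
        if value < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:  # a blink ending on the last frame
        blinks += 1
    return blinks


def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=5):
    """People typically blink roughly 15 to 20 times a minute, so a long
    clip with far fewer blinks deserves a closer look. The cutoff of 5 is
    a deliberately conservative assumption."""
    minutes = len(eye_openness) / fps / 60
    if minutes == 0:
        return False
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute


# Fabricated example: a one-minute clip at 30 fps containing a single blink.
frames = [0.3] * 1800
frames[900:904] = [0.1, 0.1, 0.1, 0.1]
print(looks_suspicious(frames))  # True: about one blink per minute
```

A real detector would extract the eye-openness signal with a face-landmark model and weigh many such cues together, since, as noted above, any single tell can be patched out by the fakers.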

Perhaps most important, the public is increasingly aware of the technology. In fact, that knowledge may ultimately pose a different kind of risk, related to and yet distinct from the generated audio and videos themselves: Politicians will now be able to dismiss real, scandalous videos as artificial constructs simply by saying, “That’s a deepfake!” In one early example of this, from late 2017, the US president’s more passionate online surrogates suggested (long after the election) that the leaked Access Hollywood “grab ’em” tape could have been generated by a synthetic-voice product named Adobe Voco.

But synthetic text—particularly of the kind that’s now being produced—presents a more challenging frontier. It will be easy to generate in high volume, and with fewer tells to enable detection. Rather than being deployed at sensitive moments in order to create a mini scandal or an October Surprise, as might be the case for synthetic video or audio, textfakes could instead be used in bulk, to stitch a blanket of pervasive lies. As anyone who has followed a heated Twitter hashtag can attest, activists and marketers alike recognize the value of dominating what’s known as “share of voice”: Seeing a lot of people express the same point of view, often at the same time or in the same place, can convince observers that everyone feels a certain way, regardless of whether the people speaking are truly representative—or even real. This is a phenomenon known as the majority illusion. As the time and effort required to produce commentary drops, it will be possible to produce vast quantities of AI-generated content on any topic imaginable. Indeed, it’s possible that we’ll soon have algorithms reading the web, forming “opinions,” and then publishing their own responses. This boundless corpus of new content and comments, largely manufactured by machines, might then be processed by other machines, leading to a feedback loop that would significantly alter our information ecosystem.
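To make the majority-illusion dynamic concrete, here is a small, self-contained simulation; the network shape and every number in it are invented purely for illustration. Three widely followed accounts hold a view shared by roughly 3 percent of all accounts, yet from most users’ vantage points that view appears to be the local consensus.

```python
import random

# Toy illustration of the "majority illusion": a view held by only a few
# widely followed accounts can look like the consensus from most users'
# vantage points. All names and numbers here are invented for the example.

random.seed(0)

NUM_HUBS = 3     # loud, widely followed accounts pushing view "X"
NUM_USERS = 100  # ordinary accounts holding view "Y"

opinions = {f"hub{i}": "X" for i in range(NUM_HUBS)}
opinions.update({f"user{i}": "Y" for i in range(NUM_USERS)})

# Each ordinary account follows every hub plus two random peers.
users = [f"user{i}" for i in range(NUM_USERS)]
following = {}
for u in users:
    peers = random.sample([p for p in users if p != u], 2)
    following[u] = [f"hub{i}" for i in range(NUM_HUBS)] + peers

# How many accounts see "X" from at least half of the accounts they follow?
fooled = sum(
    1
    for u in users
    if sum(opinions[f] == "X" for f in following[u]) >= len(following[u]) / 2
)

global_share = NUM_HUBS / (NUM_HUBS + NUM_USERS)
print(f"Globally, {global_share:.0%} of accounts hold view X,")
print(f"but {fooled / NUM_USERS:.0%} of ordinary accounts see X as their local majority.")
```

The point of the toy model is structural: what looks like consensus depends on who is loud and widely followed, not on how many people actually agree, and cheap synthetic text makes loud, numerous voices trivial to manufacture.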

Right now, it’s possible to detect repetitive or recycled comments that use the same snippets of text in order to flood a comment section, game a Twitter hashtag, or persuade audiences via Facebook posts. This tactic has been observed in a range of past manipulation campaigns, including those targeting US government calls for public comment on topics such as payday lending and the FCC’s network-neutrality policy. A Wall Street Journal analysis of some of these cases spotted hundreds of thousands of suspicious contributions, identified as such because they contained repeated, long sentences that were unlikely to have been composed spontaneously by different people. If these comments had been generated independently—by an AI, for instance—these manipulation campaigns would have been much harder to smoke out.
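As a rough sketch of that kind of detection, the snippet below groups submissions that share a long, identical sentence. The length threshold and the sample comments are invented, and real analyses are considerably more involved, but the underlying signal, verbatim repetition of improbable length, is the same.

```python
import re
from collections import defaultdict

# Crude sketch of spotting recycled comments: group submissions that share a
# long, identical sentence. The threshold and sample comments are made up.

MIN_SENTENCE_LENGTH = 80  # characters; short sentences repeat innocently

def long_sentences(text):
    """Split text into sentences and keep only the long ones, normalized."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return {s.strip().lower() for s in sentences
            if len(s.strip()) >= MIN_SENTENCE_LENGTH}

def find_recycled(comments):
    """Map each long sentence to the IDs of every comment containing it and
    return only those sentences that appear in more than one comment."""
    seen = defaultdict(set)
    for comment_id, text in comments.items():
        for sentence in long_sentences(text):
            seen[sentence].add(comment_id)
    return {s: ids for s, ids in seen.items() if len(ids) > 1}

# Tiny invented example: two "different" commenters reuse the same boilerplate.
comments = {
    "c1": "I urge the commission to repeal this rule. The existing framework "
          "has stifled investment in broadband infrastructure across rural "
          "communities for years.",
    "c2": "Please listen to ordinary citizens. The existing framework has "
          "stifled investment in broadband infrastructure across rural "
          "communities for years.",
    "c3": "I support keeping the current protections in place.",
}

for sentence, ids in find_recycled(comments).items():
    print(f"Repeated across {sorted(ids)}: {sentence[:60]}...")
```

Text generated fresh for each submission would never trip a check like this, which is exactly the worry: the cheap, obvious signal disappears.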

In the future, deepfake videos and audiofakes may well be used to create distinct, sensational moments that commandeer a press cycle, or to distract from some other, more organic scandal. But undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister. The ability to manufacture a majority opinion, or create a fake-commenter arms race—with minimal potential for detection—would enable sophisticated, extensive influence campaigns. Pervasive generated text has the potential to warp our social communication ecosystem: algorithmically generated content receives algorithmically generated responses, which feeds into algorithmically mediated curation systems that surface information based on engagement.

Our trust in each other is fragmenting, and polarization is increasingly prevalent. As synthetic media of all types—text, video, photo, and audio—increases in prevalence, and as detection becomes more of a challenge, we will find it increasingly difficult to trust the content that we see. It may not be so simple to adapt, as we did to Photoshop, by using social pressure to moderate the extent of these tools’ use and accepting that the media surrounding us is not quite as it seems. This time around, we’ll also have to learn to be much more critical consumers of online content, evaluating the substance on its merits rather than its prevalence.
