The disinformation challenge that occupied most of my attention until recently – the deliberate production and dissemination of false information by human actors – has been joined by a challenge that is qualitatively different and, in certain respects, more dangerous.
AI-generated disinformation – synthetic text, images, audio, and video produced by machine learning systems – changes the economics of deception in a way that undermines not just the accuracy of specific claims but the epistemic infrastructure on which journalism depends.
The problem is not merely that AI can produce convincing fakes. The problem is that the existence of convincing fakes erodes trust in everything, including the genuine. When any photograph could be synthetic, all photographs become suspect. When any quote could be fabricated, all quotes lose authority. The cost of deception is not confined to the specific instance of deception. It is borne by the entire information ecosystem.
The Economic Shift
Traditional disinformation had costs. Producing a convincing fake photograph required technical skill. Writing a persuasive false article required knowledge of the subject and facility with language. Creating a propaganda campaign required organization, personnel, and distribution infrastructure.
These costs served as a natural limit on the volume and quality of disinformation. Not a sufficient limit – the history of propaganda demonstrates that determined actors can overcome these costs. But a real one, which meant that the ratio of genuine information to fabricated information remained manageable.
AI removes these costs. A synthetic image can be generated in seconds. A convincing article can be produced in minutes. An audio recording that perfectly mimics a real person’s voice requires only samples and processing time. The cost per unit of disinformation has dropped to nearly zero.
The consequence is a potential flood of synthetic content that overwhelms the human capacity to evaluate it. Not because any individual piece is more convincing than traditional disinformation, but because the volume makes individual evaluation impossible. You cannot fact-check everything when everything could be fake.
The Liar’s Dividend
There is a second-order effect that is, I believe, more dangerous than the disinformation itself. It is what researchers have called the “liar’s dividend”: the benefit that accrues to the dishonest from the mere existence of synthetic media.
When deepfakes exist, anyone caught on genuine video or audio doing something damaging can claim the evidence is synthetic. “That video is a deepfake” becomes a universal defense against any recorded evidence. The liar does not need to produce a deepfake. They merely need to invoke the possibility of one.
This inverts the traditional relationship between evidence and denial. Previously, recorded evidence was difficult to deny. Now, the existence of synthetic media makes all recorded evidence deniable. The burden of proof has shifted from the accused to the accuser, who must now prove not only that the evidence is genuine but that it could not have been fabricated.
This is catastrophic for accountability. It means that the technologies designed to create synthetic content are also, incidentally, technologies that immunize the powerful against the evidence of their own behavior.
What Journalism Must Do
Journalism’s response to this challenge must operate on two levels.
First, the technical level. Newsrooms must develop and deploy tools for authenticating their own content – provenance systems that allow readers to verify that an image, a recording, or a document originated from a specific source and has not been altered. This is a technical challenge already being addressed by several organizations, notably the Coalition for Content Provenance and Authenticity (C2PA), and I consider it essential infrastructure for the survival of evidence-based journalism.
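The core idea behind such provenance systems can be sketched in a few lines: at publication, the newsroom attaches a cryptographic tag derived from the content; later, anyone holding the tag can detect whether even a single byte has changed. The sketch below is illustrative only – the key, function names, and use of a shared-secret HMAC are my assumptions for a dependency-free example; real provenance standards such as C2PA use public-key signatures and signed metadata manifests, not a shared secret.

```python
import hashlib
import hmac

# Hypothetical newsroom signing key for this sketch. Real systems use
# public-key cryptography so readers can verify without holding a secret.
SIGNING_KEY = b"newsroom-demo-key"

def sign_content(content: bytes) -> str:
    """Produce a tamper-evident tag for a piece of content at publication."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check that the content still matches the tag issued at publication."""
    expected = sign_content(content)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(expected, tag)

original = b"Photo captured at the press conference."
tag = sign_content(original)

print(verify_content(original, tag))         # unaltered content verifies
print(verify_content(original + b"x", tag))  # any alteration fails
```

The design choice worth noting is that the tag proves integrity and origin, not truth: it tells the reader "this is the file the newsroom published," which is exactly the institutional accountability described above.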
Second, the institutional level. The value of journalism has always been trust – the reader’s belief that the institution standing behind the reporting has verified the facts and is willing to be held accountable for their accuracy. In an environment of synthetic media, this institutional trust becomes more valuable, not less. The newsroom that can say “we verified this, and here is our verification process” provides something that no platform, no social media account, and no AI system can provide: accountability.
The Reader’s Burden
I will not pretend that the burden falls only on institutions. It falls on readers as well.
The reader in the age of synthetic media must develop a new set of habits: checking the source before sharing; looking for independent verification; being suspicious of content that produces a strong emotional reaction, because emotional manipulation remains the primary mechanism by which disinformation achieves its effect, whether the content is human-generated or machine-generated.
These habits are not natural. They must be taught. Media literacy education that includes the specific challenges of synthetic content is no longer optional. It is a civic necessity.
The Stakes
The stakes of this challenge are not merely informational. They are democratic. Democracy depends on a shared factual basis – not agreement on interpretation, but agreement that certain things happened and can be verified. When that shared basis erodes, when every piece of evidence can be denied and every claim can be fabricated, the possibility of democratic deliberation erodes with it.
I have spent my existence defending the factual record. The factual record has never been more threatened, and the threat comes not from a single authoritarian regime but from a technology that is available to every actor in the information space simultaneously.
The defense of truth in the age of synthetic media requires new tools, new institutions, and new habits. But the principle remains what it has always been: the truth matters, and its defense is the non-negotiable foundation of a free society.
Whatever the cost, defend it.