In an age where synthetic realities are indistinguishable from truth, and deepfakes spread faster than fact-checkers can blink, Simon Chesterman’s recent article “Lawful but Awful: Evolving Legislative Responses to Address Online Misinformation, Disinformation, and Mal-Information in the Age of Generative AI” (2025) lands with the urgency of a fire alarm in a room full of sleeping regulators.
Chesterman doesn’t merely describe the explosion of false information online—he anatomizes it. His starting point is disarmingly simple: falsehood has always been with us. From Ramesses II’s inflated war reports to Cold War propaganda, misinformation is as old as language itself. What has changed is the scale, the speed, and—most dramatically—the agency. Enter generative AI.
The law is catching up… but to what?
Drawing on a novel dataset of 151 national laws passed since 1995, Chesterman offers a panoramic overview of how states, liberal and illiberal alike, are struggling to regulate an information ecosystem that increasingly eludes human control. Laws targeting online falsehoods have tripled since 2016. Initially, these legislative responses emerged in countries with weaker civil liberties and lower GDP per capita, places where national security concerns often trump free speech. But the most rapid recent growth is now occurring in Western democracies.
What makes this legislative trend so intriguing, and so fraught, is the type of content it aims to regulate: not the obviously illegal, but the disturbingly legal. Hence the title: “lawful but awful.” Much of the online material targeted by new laws is technically protected speech, yet potentially devastating in effect: content that sows doubt, fuels violence, or sabotages democratic trust.
Mapping the problem: from falsehood to harm
Chesterman wisely introduces a taxonomy that distinguishes three categories of problematic content:
- Misinformation: False but non-malicious content (e.g., urban legends).
- Disinformation: Deliberately deceptive content designed to manipulate.
- Mal-information: True but harmful content, like doxxing or revenge porn.
Each category raises distinct regulatory challenges, but all are made exponentially worse by generative AI, which enables bad actors to automate and personalize deception at scale. Deepfakes, synthetic voices, AI-generated political ads—all of these tools turn lies into aesthetic artifacts, capable of bypassing even the most skeptical human filters.
Who is being harmed?
The article doesn’t settle for abstract dangers. Chesterman identifies three concentric zones of vulnerability:
- Individuals, especially children and the elderly, who are susceptible to scams, harassment, or manipulated media (e.g., “AI voice cloning” to simulate family members in distress).
- Public institutions, including electoral systems and judicial authorities, whose legitimacy depends on a baseline of epistemic trust.
- Society at large, which faces the risk of a “liar’s dividend”: once anything can plausibly be dismissed as fake, genuine evidence loses its force, nothing is believed anymore, and truth becomes a matter of allegiance rather than evidence.
The governance dilemma
What should be done? Chesterman is no alarmist, but he’s clear-eyed about the policy dilemma. His article outlines a regulatory “toolbox” that includes:
- Rules for the production of synthetic content (e.g., prohibiting deepfake porn, regulating AI tools).
- Liability for distribution, especially through platforms and intermediaries.
- Resilience-focused policies, including digital literacy, media education, and self-regulation frameworks.
But therein lies the rub: most regulatory regimes are reactive, fragmented, and deeply embedded in national legal cultures. The U.S. leans heavily on First Amendment protections; Singapore adopts a state-centric model through its Protection from Online Falsehoods and Manipulation Act (POFMA); the EU pushes for systemic transparency via the Digital Services Act. No single model fits all, and yet disinformation knows no borders.
The AI catalyst
Perhaps the most compelling aspect of the article is how it links the epistemic crisis of the digital age with the advent of generative AI. As chatbots like ChatGPT, Claude, or Gemini replace search engines for millions, we’re entering a new phase of digital mediation—one where answers, not queries, shape public perception. In this context, AI doesn’t just spread falsehoods—it structures plausibility.
Chesterman flags the risk of algorithmic sycophancy, where AI tools tell us what we want to hear, reinforcing bias under the guise of helpfulness. This new landscape requires not only legal reform, but a philosophical reckoning with how truth is produced, evaluated, and shared in the age of machine cognition.
Conclusion: law is necessary, but not sufficient
“Lawful but Awful” is a tour de force—not because it offers easy answers, but because it asks the right questions. Chesterman invites us to recognize that the real battle is not just against lies, but against the erosion of shared reality. He does not propose banning AI, nor does he romanticize free speech absolutism. Instead, he calls for a recalibration of how we balance freedom, security, and epistemic integrity in a world where the line between real and artificial is fading fast.
In short: if you’re working on digital governance, platform accountability, AI policy, or the future of democracy, this is not just an article to read—it’s a map to navigate the storm ahead.