Synthetic Media: The New Frontier of Political Manipulation

By Michelle Xin, LAW ‘25

Your phone rings: an unknown number flashes on the screen.

You answer and hear, “What a bunch of malarkey,” a signature phrase of President Joe Biden. The automated message urges you to “save your vote,” falsely claiming that voting in the upcoming 2024 Presidential primary would preclude you from voting in November’s general election. Could this be real? Did the Biden Administration and Campaign authorize such a message?

The answer is no – it was a robocall impersonating President Joe Biden, targeting over 20,000 New Hampshire voters in what the New Hampshire Attorney General called an “unlawful attempt” at voter suppression. The robocall aimed to discourage voters from participating in the state’s Democratic primary, which helps determine the party’s presidential nominee. It occurred before President Biden exited the presidential race and Vice President Kamala Harris launched her campaign, but the underlying concerns about voter manipulation and misinformation remain regardless of who is at the top of the ticket.

Robocalls like this fall under the broader category of synthetic media—audio, video, and images generated or altered by AI. Recent advances in AI technology have dramatically increased the sophistication of synthetic media, and its misuse poses a significant threat to the integrity of democratic elections.

In the landmark case Buckley v. Valeo, the Supreme Court stressed that in a republic, where citizens are sovereign, the ability to make informed choices among candidates is crucial. Misinformation can skew public perception and affect a voter’s choice, undermining both the electoral process and the legitimacy of its outcomes.

Synthetic media can be especially dangerous because it is highly realistic—capable of making individuals appear to say or do things they never did. AI-generated content can also spread quickly and easily through social media platforms like X (formerly Twitter), TikTok, Instagram, and YouTube, reaching large audiences almost instantly. In August 2024, for example, former President and current presidential candidate Donald Trump posted an AI-generated image on Truth Social, falsely suggesting that American singer-songwriter Taylor Swift had endorsed him for president. The manipulated image portrayed Swift as Uncle Sam, promoting Trump’s campaign. Swift’s voice and image carry significant weight in political and social discourse. Her fan base comprises millions of people across diverse age groups and social backgrounds, which gives her political endorsements the ability to influence a significant number of voters. While younger Swift fans may have the digital literacy to recognize AI-generated content, older generations or those less familiar with AI manipulation may be more easily deceived, especially when the media appears highly realistic and comes from trusted platforms. This makes a false depiction of Swift supporting a candidate particularly dangerous, as it could lead some to mistakenly believe the endorsement is genuine. Swift later publicly endorsed Kamala Harris and Tim Walz, denouncing Trump’s use of AI to create a fake endorsement.

In addition to social media, traditional channels like landlines and TV commercials—long used for political campaigning—are also vehicles for synthetic media manipulation. The Biden robocall, delivered through familiar telecommunications platforms, further demonstrates how easily voters can be misled by AI-generated misinformation, especially when it appears through trusted sources.

Striking the right balance: safeguarding elections without endangering free expression

As synthetic media evolves and becomes more prevalent, the need for regulation grows increasingly urgent. Deepfakes and other AI-generated content can undermine voter understanding and decision-making, enabling election interference. Any regulation, however, must be thoughtfully crafted to uphold constitutional rights, particularly freedom of expression: policymakers must protect the integrity of democratic elections without endangering free speech.

In the United States, the First Amendment protects a wide range of expressive activity, including satire. Satire uses exaggeration and distortion to critique society, politics, and individuals, often blurring the line between truth and outrageousness. Even expression that is purely deceptive and lacks artistic or lawful purpose generally receives constitutional protection and cannot be prohibited simply for being misleading. Synthetic media can function as a form of satire, and the robust protections of the First Amendment often shield it even when it causes harm. This raises the question: whose images or voices can be mimicked without consent and still receive First Amendment protection?

Public figures face a unique challenge when relying on defamation law to combat the use of their voice or likeness in synthetic media. Unlike private individuals, public figures such as politicians must prove “actual malice” in defamation lawsuits — that false statements were made knowingly or with reckless disregard for the truth. Even in clear-cut cases, adjudicating such lawsuits takes time, and public-figure plaintiffs run up against the same First Amendment protections designed to encourage open debate on public issues and shield satire. For any plaintiff or regulator, the evolving media environment poses an additional hurdle: current defamation laws and protections were developed for traditional media, where authorship and intent are more straightforward. AI-generated content, circulating across increasingly dominant alternative media and communications platforms, blurs these lines and often makes it difficult to apply existing legal standards to synthetic media.

Despite, and indeed because of, these complexities, the need for new regulation tackling the pernicious effects of synthetic media on democratic practice and election integrity is clear. Many proposals, such as those from the Brennan Center for Justice and FIRE, call for narrowly tailored laws that provide clear justifications for restricting expression and limit restrictions on speech to what is strictly necessary. California’s recent legislation on synthetic media, particularly deepfakes, highlights the challenges of doing so.

In September 2024, California Governor Gavin Newsom signed three bills aimed at addressing election deepfakes ahead of the upcoming election. Two of these laws, AB 2655 and AB 2839, are currently being challenged in court: AB 2839 took effect immediately upon signing and allows individuals to sue for damages related to election deepfakes, while AB 2655 mandates the removal of deceptive content from major online platforms starting next year. Christopher Kohls, who created a viral deepfake video of sitting Vice President and Democratic presidential nominee Kamala Harris saying that she is the “diversity hire,” has sued to block these new laws, claiming that they infringe on free speech and enable anyone to take legal action against content they dislike. In response, the Governor’s office maintains that the laws do not ban satire or parody; rather, they merely require the disclosure of AI use in altered videos or images to prevent the erosion of public trust in U.S. elections amid a “fraught political climate.”

While the government’s motives to regulate synthetic media, especially deepfakes, include compelling aims like protecting democratic integrity and promoting an informed electorate, the judiciary’s response highlights the complexity of these issues. On October 2, 2024, U.S. District Court Judge John A. Mendez issued a preliminary injunction blocking California’s AB 2839, which targets distributors of AI deepfakes on social media. The plaintiff argued that AB 2839 violated the First and Fourteenth Amendments, both facially and as applied.

In granting the injunction, Judge Mendez found that the plaintiff would likely succeed in his facial challenge to the constitutionality of AB 2839, a sweeping outcome since facial challenges are among the most difficult to mount successfully; challengers must prove that there are no circumstances under which the law could be valid. In his ruling, Judge Mendez highlighted the necessity of protecting artistic expression and political discourse, even in the face of misinformation concerns. His decision warns against overly broad regulations that could stifle legitimate speech, including satire and political commentary.

Judge Mendez’s ruling highlights the challenges of balancing democratic integrity with the protection of free expression. Rather than merely a roadblock to appropriate regulation, the decision is a step toward navigating the complexities of regulating digital content. As technology continues to evolve, legislators and regulators must weigh how to establish a legal framework that addresses the most severe harms posed by synthetic media and reinforces the foundational principles of democracy, all while grappling with the broad protections afforded to free speech under federal law and judicial interpretation.

Can a Federal Right of Publicity Law mitigate these challenges?

In response to the commercialization of AI-generated deepfakes and the technology’s propensity to misrepresent depicted individuals, some Members of Congress have proposed the creation of a federal “right of publicity” (ROP). The ROP is currently a state-based property right that protects an individual’s name, likeness, and other aspects of their identity from being used for commercial purposes without permission. As such, the ROP varies widely by state, with thirty-seven states recognizing the right by statute or common law. This variability has complicated enforcement and limited states’ ability to uniformly address the interstate nature of digital content distribution. Harmonizing these differences into a federal ROP could curtail deepfake misinformation by enabling individuals to assert control over their likenesses and deter misuse in misleading or harmful political contexts.

As the commercial use of deepfakes across digital platforms and media increases, the need for regulation to prevent the exploitation of individuals depicted in these media grows more urgent. While deepfakes may not typically fall under “commercial” content, they can be widely shared online and generate significant revenue. Platforms like YouTube, TikTok, and Facebook incentivize high-traffic content through monetization models, allowing creators to profit from ads, sponsorships, and subscriber donations when deepfake videos go viral. As a result, deepfakes can exploit an individual’s likeness for commercial purposes without obtaining proper authorization.

Existing laws are inadequate to effectively compel the removal of deepfakes. The Federal Trade Commission (FTC) regulates commercial practices through its “Truth in Advertising” standard, but its authority is limited to cases involving deceptive or fraudulent activities. If a creator claims that their deepfake content is parody or satire, this assertion may exempt the content from FTC oversight. Similarly, the Federal Election Commission (FEC) has initiated efforts to regulate AI-generated deepfakes in political ads. However, this regulation would only focus on deepfakes used in campaign advertising, leaving unaddressed the broader threats of misinformation and misrepresentation that deepfakes pose to the electoral process. A federal right of publicity would more comprehensively address these issues by encompassing all forms of unauthorized commercial use of an individual’s likeness.

While an inherent tension exists between the right of publicity and the First Amendment’s robust protection of freedom of expression, especially regarding satire or parody, the Supreme Court’s Jack Daniel’s decision indicates a shift toward balancing the need to protect individuals from deception with the preservation of free speech.

In September 2021, the Third Circuit Court of Appeals rejected a Section 230 defense in a state-law ROP case brought by Philadelphia news anchor Karen Hepp against a dating service and Facebook for unauthorized use of her likeness in ads. The opinion split from the Ninth Circuit on whether ROP claims fall within Section 230’s exemption for intellectual property claims. The outcome suggests that a federal ROP approach that defines a cause of action as falling within the Section 230 intellectual property exemption might also overcome enduring questions about the scope of this infamous liability shield. In short, a federal right of publicity statute could be designed to address only misleading or harmful deepfakes, while still allowing for legitimate political commentary and satire, and averting the First Amendment and Section 230 challenges that have plagued other legislation, enforcement, and plaintiff-side legal actions.

One approach to establishing a federal right of publicity (ROP) is to preempt existing state laws, thereby promoting a more uniform standard across the country. Such a federal ROP could provide clear guidelines on how individuals’ likenesses can be used commercially, reducing the confusion that currently arises from the varied protections offered by different states.

Alternatively, a federal ROP could set minimum standards for individuals while still permitting states to implement and enforce stronger measures if they choose to do so. This flexible approach would allow for a baseline of rights that everyone can rely on, while also respecting states’ rights to legislate more stringent protections based on their unique circumstances and values. Either strategy would work to eliminate the inconsistencies and legal ambiguities present in the current patchwork of state laws regarding the right of publicity.

By carefully balancing an individual’s agency and autonomy in determining the use of their name and likeness with the countervailing interest in protecting freedom of expression, legislators can create a framework that protects individuals from harm while preserving the public’s ability to engage in meaningful discourse. A federal ROP could provide the necessary clarity and consistency to safeguard both individual rights and the principles of free speech.

Conclusion

The influence of synthetic media on Tuesday’s election cannot be overstated, as it has already altered public perception significantly and fueled misinformation narratives. As this technology continues to distort reality and misrepresent candidates, the need for effective regulation and enforcement of existing laws becomes increasingly urgent.

Lawmakers and regulators must engage in nuanced dialogue on the implications of synthetic media on the electoral process to establish regulations that balance the protection of free expression with the need to safeguard elections from manipulation. Researchers must continue to study the impacts of synthetic media on the public to inform the dialogue and guide effective policy decisions. While many urgent legal questions remain, one fact is clear: ensuring that synthetic media does not become a tool for deceiving voters is crucial to protecting democracy.

Michelle Xin is a third year J.D. candidate at Temple University Beasley School of Law. Her interests include international trade, regulatory work, and compliance.

This blog is a part of iLIT’s student blog series. Each year, iLIT hosts a summer research assistant program, during which students may author a blog post on a topic of their choosing at the intersection of law, policy, and technology. You can read more student blog posts and other iLIT publications here.
