
Deepfake technology presents one of the most pressing emerging challenges in modern law. While synthetic media can be used for creative expression, it has also enabled new forms of harassment and crime. The central question is whether deepfakes should be regulated even if the rules restrict some satirical or artistic uses. Given the available facts, the answer is undeniably yes: deepfakes inflict enough damage that specific legislation is required. Crucially, though, these regulations can be crafted to target harmful applications of deepfakes while minimizing the burden on artistic and satirical uses. Maintaining this balance is essential to preventing excessive limitations on the right to free speech.
Nowhere are the dangers of deepfakes more evident than in elections. Fake audio and video can mislead voters in the crucial days before an election, when there is little time for fact-checking. In early 2024, a robocall imitating President Biden’s voice was sent to voters in New Hampshire, telling them not to vote in the primary. The Federal Communications Commission responded by ruling that AI-generated voices fall under the Telephone Consumer Protection Act and are unlawful in robocalls. The Commission stated that its decision “makes voice cloning technology used in common robocall scams … illegal,” and a $6 million fine was levied against the individual responsible for the call. This episode shows that deepfakes are not just a hypothetical risk to democracy but a working tool of electoral manipulation that warrants a clear legal response.
Beyond elections, deepfakes have already been used to perpetrate fraud. AI-generated voices and videos have been deployed in financial scams, corporate impersonation, and identity theft, making misrepresentation far more convincing and scalable. Fraud statutes already exist, but the unique reach of deepfakes creates enforcement gaps that the law has yet to fully address. With the proper legal measures, however, these gaps can be closed: targeted legislation against malicious deepfakes would strengthen existing prohibitions on fraud and impersonation while preserving space for legitimate artistic and satirical expression.
The most troubling abuse of deepfakes lies in non-consensual sexual imagery. Studies have found that the overwhelming majority of deepfake videos online, as many as 96 percent, depict sexual content made without the consent of those portrayed. Women, especially public figures, are disproportionately targeted. Victim-support organizations such as the UK’s Revenge Porn Helpline report a rise in cases involving AI-generated pornography, describing victims as “desperate” when they discover manipulated images of themselves circulating online. Existing non-consensual pornography laws often fall short in this context, especially when perpetrators are anonymous.
Several U.S. states have already passed laws targeting political or pornographic deepfakes. In 2025, Michigan enacted bipartisan legislation making non-consensual sexual deepfakes illegal and opening the door to both criminal and civil lawsuits. But the problem transcends national boundaries, making international collaboration essential. At the global level, the European Union’s Artificial Intelligence Act takes a transparency-based approach, requiring that “content … generated or modified with the help of AI … be clearly labelled as AI-generated.” While not a ban, this type of law aims to protect audiences from being deceived without prohibiting all uses of synthetic media.
The First Amendment protects artistic, satirical, and political expression, and deepfakes can sometimes fall into these categories. Yet the Constitution also recognizes exceptions where speech causes legally cognizable harm: fraud, defamation, obscenity, and true threats can all be regulated without violating the First Amendment. As the Supreme Court recognized in United States v. Alvarez, false statements are not automatically unprotected, but they may be regulated when tied to specific harms. If deepfake laws are narrowly drawn, focusing on consent and election integrity, they are likely to withstand constitutional challenge.
The harms of deepfakes are widespread, and regulators have already shown a way forward. The FCC, state legislatures, and international bodies have demonstrated that narrow regulations are both feasible and practical. As the FCC emphasized, its ruling “makes voice cloning technology used in common robocall scams … illegal.” Targeted intervention of this kind does not aim to limit artistic expression; it serves as a necessary safeguard for truth.
Reagan Schroeder, a student in Jon Pfeiffer’s media law class at Pepperdine University, wrote the above essay in response to the following prompt:
"Should deepfake technology be restricted by law, even at the expense of certain artistic or satirical uses?"
Reagan Schroeder is a Public Relations major at Pepperdine University with a passion for communications, media, and public policy.