Amid the rapid growth of AI adoption, the technology has enabled the creation of highly convincing deepfake images and videos that can depict real people and celebrities with extraordinary accuracy. While deepfakes can be used in humorous, artistic, and creative contexts, they also carry serious risks to privacy, security, and personal rights. As a result, debates have emerged about whether deepfakes should be legally restricted, even when such regulations would apply to social media users who produce deepfake images and videos as content. Deepfake technology should be legally restricted because its potential harms, from identity theft to political misinformation, outweigh the artistic benefits it can offer the public. Examining its harms to individuals, its threats to democracy, and the existing limits on free speech makes it clear that regulation is necessary.
To begin with, deepfake technology poses a serious risk to an individual's reputation and privacy. Scholars have noted that deepfakes are increasingly being used for hostile purposes. Although deepfakes can be used for creative comedy on social media, the technology also enables non-consensual pornography, identity theft, and harassment. According to a report by Deeptrace Labs, The State of Deepfakes, 96% of deepfake videos online are non-consensual pornography. These videos have devastating consequences, leaving victims vulnerable to reputational damage, cyberbullying, and psychological harm. Unlike traditional parody, these uses exploit a person's likeness without authorization. Case law reinforces the need for such protection. In Zacchini v. Scripps-Howard Broadcasting Co. (1977), the U.S. Supreme Court upheld a performer's right to control the use of his own performance, recognizing that individuals have a protectable interest in their image and labor. Deepfakes that replicate a person's face or voice without consent, particularly for exploitative purposes, represent a similar violation of autonomy and should be subject to comparable legal restrictions.
Additionally, deepfakes threaten democratic institutions and public trust. At a time when misinformation spreads rapidly online, deepfakes blur the line between fact and fabrication. A 2020 Brookings Institution report warned that deepfakes could “erode the evidentiary value of a video and audio” by making it difficult for citizens to distinguish between what is real and what is not. Consider, for example, a fabricated video of Trump or Biden making controversial statements, released shortly before an election. Not only would such a video spread misinformation, it could also sway voter opinion and diminish faith in U.S. democracy. The consequences extend beyond politics: fabricated statements about financial markets, Federal Reserve rate decisions, or other public officials could cause immediate harm. These potential harms are too severe to leave unregulated and far exceed the benefits of deepfakes for artistic uses. For this reason, legal restrictions on deepfakes would protect both democratic institutions and citizens by preserving the reliability of public media.
Finally, although deepfakes could be defended on free speech grounds, U.S. law has historically allowed limits on expression that directly harms others. The First Amendment does not protect fraud, defamation, or incitement, and harmful deepfakes can fall into these same categories. For example, a defamatory video used to undermine a political campaign could fall under defamation law, while media designed to manipulate the stock market could be considered fraud. Courts have recognized that not all forms of expression receive absolute protection. In Hustler Magazine, Inc. v. Falwell (1988), the Supreme Court upheld the right to parody, but the decision rested on the assumption that parody would not be reasonably interpreted as fact. Deepfakes, by contrast, are designed to look real, making them difficult for viewers to identify as false. As legal scholar Danielle Citron notes, “law should focus on the harmful uses of deepfakes, not their creative potential,” ensuring that restrictions target abuse rather than artistic expression. This framework allows for a balance: safeguarding individuals while leaving room for creative work under clearly defined circumstances.
In conclusion, while deepfake technology can be used for creativity and humor, its risks outweigh these benefits. Individual identity, public trust, and democratic integrity are all values that demand legal protection. Analysis of prior legal cases and ethical reasoning makes clear that the potential harms of deepfakes reach from personal reputations to democratic institutions. By adopting such restrictions, society can begin to use AI in a morally sound manner while defending the rights of U.S. citizens.
Andrew Kim
Integrated Marketing Communications Major