By ArdorComm Media Bureau
January 5, 2024
In a landmark move that may reshape the political advertising landscape, the Republican National Committee (RNC) recently unveiled the first national campaign advertisement crafted entirely by artificial intelligence (AI). Released as President Biden kicked off his re-election campaign, the thirty-second ad used AI-generated imagery to depict a dystopian vision of four more years under his leadership. While the RNC openly acknowledged its use of AI, the arrival of the technology in political advertising raises concerns about misinformation and its potential impact on public perception.
The integration of AI into political advertising echoes the predictions made by Robert Chesney and Danielle Citron in their 2018 Foreign Affairs article on deepfakes and the new disinformation war. The perfect storm of social media information cascades, declining trust in traditional media, and the increasing believability of deepfakes has created a breeding ground for misinformation. Recent instances, such as a deepfake video falsely portraying President Biden announcing a military draft for Ukraine, highlight the danger that even content shared transparently as AI-generated can be stripped of context and misconstrued as authentic.
While Chesney and Citron initially focused on the geopolitical threats posed by deepfakes, the technology’s entry into political advertising introduces a new dimension. Past campaigns have witnessed a race to produce provocative ads, with digitally manipulated images becoming a common tool. Notably, the McCain campaign in 2008 used manipulated images of Barack Obama, underscoring the evolving landscape of political communication.
However, the implications of AI-generated content extend beyond mere political attacks. Vulnerable populations, including women, people of color, and LGBTQI+ individuals, are likely to bear the brunt of these emerging technologies. A Center for Democracy and Technology report on the 2020 congressional election cycle revealed that women of color candidates were twice as likely to face mis- and disinformation campaigns online. The weaponization of deepfake technology in India against female politicians and journalists adds another layer of concern, emphasizing the potential for AI-generated content to be used in ways that undermine credibility and perpetuate harm.
The “liar’s dividend,” as coined by Citron and Chesney, presents another risk. Realistic fake videos and images may provide politicians with an escape route from accountability, allowing them to dismiss genuinely problematic content as AI-generated or a deepfake. In an era characterized by negative partisanship, the liar’s dividend could become a potent tool for evading responsibility.
As social media platforms grapple with the challenges posed by AI-generated content, there is a pressing need for comprehensive policies. Meta and TikTok have implemented measures to address deepfakes, but integrating these rules with existing political content policies remains a challenge. In response to the RNC ad, Representative Yvette Clarke introduced the “REAL Political Advertisements Act,” which would mandate disclosure of AI-generated content in political ads.
The Biden administration’s recent action plan to promote responsible AI innovation and the Senate Judiciary Subcommittee on Privacy, Technology, and the Law’s hearing on AI oversight indicate a growing awareness of the need for regulatory measures. With another election cycle underway, the intersection of AI and politics demands urgent attention and thoughtful regulation to safeguard the integrity of political discourse and public trust.