
Deepfakes

YouTube to Tighten Monetization Rules Amid Rising Concerns Over AI-Generated ‘Slop’ Content

YouTube is set to roll out stricter guidelines aimed at curbing monetization of inauthentic, repetitive, or mass-produced videos, a move largely prompted by the surge of AI-generated content flooding the platform. Effective July 15, YouTube will update its Partner Program (YPP) monetization policies, offering clearer definitions of what qualifies as “authentic” content and what doesn’t. While the precise wording of the policy update has yet to be published, YouTube’s Help Center now emphasizes that the platform has always required creators to post original content to be eligible for earnings.

The upcoming changes, according to YouTube’s Head of Editorial & Creator Liaison Rene Ritchie, are intended to clarify rather than overhaul policy. In a recent video message, Ritchie reassured creators that widely accepted formats such as reaction videos or content that includes reused clips won’t be affected. He called the update a “minor” revision to existing rules, specifically targeting mass-produced, low-value content that viewers often flag as spam.

However, the broader context paints a more urgent picture. As generative AI tools become more accessible, platforms like YouTube have seen an influx of low-effort, AI-generated videos, ranging from automated voiceovers layered over stock images or video snippets to fully AI-generated true-crime series and even fabricated news updates that have drawn millions of views. Some AI-based music channels boast massive followings despite questions about their originality and authenticity. One notable example is an AI-generated true-crime series that went viral and was later revealed by 404 Media to be entirely machine-made. Even YouTube CEO Neal Mohan was recently featured in a deepfake scam, underlining how pervasive, and potentially harmful, this technology has become.

Though YouTube insists the July update is a clarification rather than a policy shift, the underlying motive is clear: prevent the platform from being overrun by AI-created “slop” that could undermine its integrity and viewers’ trust. By implementing the revised guidelines, YouTube aims to draw a firm line against inauthentic content, making it easier to deny monetization and, if necessary, remove offending creators from the Partner Program altogether.

As AI tools continue to evolve, platforms like YouTube are being forced to adapt quickly to ensure that content quality and originality remain at the core of their ecosystems.

Source: TechCrunch


AI in Government: Navigating the Uncharted Terrain of Deepfakes and Misinformation


In a landmark move that may reshape the political advertising landscape, the Republican National Committee (RNC) recently unveiled the first national campaign advertisement crafted entirely by artificial intelligence (AI). As President Biden kicked off his re-election campaign, the thirty-second RNC ad used AI-generated images to depict a dystopian vision of four more years under his leadership. While the RNC openly acknowledged its use of AI, the arrival of such technology in political advertising raises concerns about misinformation and its potential impact on public perception.

The integration of AI into political advertising echoes the predictions made by Robert Chesney and Danielle Citron in their 2018 Foreign Affairs article on deepfakes and the new disinformation war. The perfect storm of social media information cascades, declining trust in traditional media, and the increasing believability of deepfakes has created a breeding ground for misinformation. Recent instances, such as a deepfake video falsely portraying President Biden announcing a military draft for Ukraine, highlight how even transparently disclosed AI content can be misconstrued as authentic information.

While Chesney and Citron initially focused on the geopolitical threats posed by deepfakes, the technology’s entry into political advertising introduces a new dimension. Past campaigns have witnessed a race to produce provocative ads, with digitally manipulated images becoming a common tool; notably, the McCain campaign used manipulated images of Barack Obama in 2008, underscoring the evolving landscape of political communication.

The implications of AI-generated content extend beyond political attacks, however. Vulnerable populations, including women, people of color, and LGBTQI+ individuals, are likely to bear the brunt of these emerging technologies. A Center for Democracy and Technology report on the 2020 congressional election cycle found that women of color running for office were twice as likely to be targeted by mis- and disinformation campaigns online. The weaponization of deepfake technology against female politicians and journalists in India adds another layer of concern, emphasizing the potential for AI-generated content to undermine credibility and perpetuate harm.

The “liar’s dividend,” as coined by Citron and Chesney, presents another risk: realistic fake videos and images may give politicians an escape route from accountability, allowing them to dismiss genuinely problematic content as AI-generated or a deepfake. In an era of negative partisanship, the liar’s dividend could become a potent tool for evading responsibility.

As social media platforms grapple with the challenges posed by AI-generated content, there is a pressing need for comprehensive policies. Meta and TikTok have implemented measures to address deepfakes, but integrating these rules with existing political content policies remains a challenge. In response to the RNC ad, Representative Yvette Clarke introduced the “REAL Political Advertisements Act,” which would mandate disclosures for AI-generated content in political ads. The Biden administration’s recent action plan to promote responsible AI innovation and a hearing on AI oversight by the Senate Judiciary Subcommittee on Privacy, Technology, and the Law signal a growing awareness of the need for regulatory measures.
With another election cycle underway, the intersection of AI and politics demands urgent attention and thoughtful regulation to safeguard the integrity of political discourse and public trust.
