ArdorComm Media Group

Misinformation

Over Half of Indians Rely on Social Media for News: Reuters Institute Report

The Reuters Institute Digital News Report 2024 reveals that over 50% of Indians rely on social media platforms such as YouTube and WhatsApp for news. The report highlights a global decline in trust toward mainstream news brands and an increase in news avoidance and feelings of being overwhelmed by the sheer volume of content. The survey, conducted across 47 markets on six continents, notes that while platforms like YouTube (54%) and WhatsApp (48%) are increasingly preferred for news in India, Facebook and X (formerly Twitter) are losing popularity. The report also underscores a worldwide decline in Facebook’s news consumption, which dropped by 4% in the past year.

The study identifies a global shift in news consumption habits, with younger audiences favoring short video formats. Platforms such as TikTok and YouTube are emerging as key sources, while traditional publishers face challenges in monetization and audience engagement. Two-thirds of respondents globally expressed a preference for short news videos over long formats. The report also sheds light on the growing role of influencers, commentators, and independent creators as trusted sources of information, particularly on platforms like YouTube and TikTok. Traditional journalists, however, still retain credibility on networks such as Facebook and X.

A major concern revealed in the report is the rise in misinformation. Globally, 59% of respondents expressed concern about distinguishing real from fake news online, with platforms like TikTok and X flagged for hosting misinformation, including “deepfake” content. Trust in news is alarmingly low worldwide: only 40% of respondents say they trust the news they consume. Finland leads with a 69% trust rate, while Greece and Hungary lag at just 23%.

The report also highlights financial challenges for journalism, as fewer people are willing to pay for news. Only 17% of respondents in richer nations reported paying for an online news subscription, with significant discounts influencing those who do. Adding to the pressure on sustainable journalism are technological disruptions, including the growing influence of AI. The report warns that AI tools may flood the media landscape with low-quality synthetic content, further eroding trust and interest in news. The findings point to a critical juncture for global journalism, with calls for innovation and trust-building amid the shifting dynamics of news consumption.

Source: The Wire
Photo Credit: The Wire

Survey: Germans See Social Media as the Main Source of Fake News

A recent study by the Bertelsmann Foundation reveals that 81% of Germans view the spread of disinformation as a significant threat to democracy, with social media emerging as the primary source of fake news. The survey highlights widespread concerns about the impact of online misinformation on elections, social cohesion, and contentious topics like migration, health, climate change, and war. Approximately 78% of respondents worry that such falsehoods could influence electoral outcomes and deepen societal divisions.

Two-thirds of participants identified active social media users and bloggers as the main culprits behind the spread of misinformation. Additionally, 53% pointed to foreign governments, and half of the respondents even accused the German government of contributing to the problem. Despite these concerns, the study found that 93% of respondents trust the media but believe fake news is deliberately propagated to undermine confidence in politics and democracy.

Some social media users have proposed that Germany adopt measures similar to Türkiye’s “Disinformation Combat Center,” a government initiative designed to counteract false information. The rise of misinformation on social media has become a global concern, with many nations grappling with its impact on political polarization and public trust. Germany’s experience underscores the urgent need for robust strategies to combat fake news and safeguard democratic processes. As disinformation continues to grow, Germany and other nations face increasing challenges in balancing free speech with efforts to protect their democracies from the erosion of public trust.

Bahraich Police Warns Right-Wing Media Against Spreading Fake News Amidst Communal Clashes


Following the recent communal clashes in Bahraich, Uttar Pradesh, local police have issued strict warnings against the dissemination of fake news, particularly by certain right-wing media outlets. The violence, sparked by a dispute over loud music during the Durga idol immersion procession on October 13, led to the death of 22-year-old Ram Gopal Mishra and left several others injured.

Media outlets, including Aaj Tak and Zee News, have come under fire for allegedly spreading misleading information regarding Mishra’s death. Reports claimed that Mishra was subjected to brutal torture, including electric shocks and mutilation, before his death. Sudhir Chaudhary, a well-known journalist with a history of controversial reporting, echoed these false reports on Aaj Tak, suggesting Mishra’s death was the result of unprecedented violence against Hindus.

The Bahraich police were quick to debunk these claims, stating that Mishra had died from gunshot wounds after being shot 20 times during the clashes. A video surfaced showing Mishra storming into a Muslim household and vandalizing the property before being shot. The police confirmed that the cause of death was solely bullet injuries, with no evidence of torture or mutilation.

In response to the misinformation circulating on social media, the Bahraich police have issued public warnings on their official X (formerly Twitter) account, urging people not to spread false narratives that could escalate communal tensions. They emphasized that legal action would be taken against those found guilty of disseminating misleading information. The police clarified the situation in a statement: “Misinformation like electrocuting the deceased, killing him with a sword, and pulling out nails was spread on social media to disturb communal harmony. The postmortem clearly shows the cause of death was gunshot wounds. We urge everyone to refrain from spreading rumours and maintain peace.”

The clashes and subsequent riots led to the suspension of internet services in Bahraich to prevent further unrest. Over 55 people have been detained, and the situation, while still tense, is gradually returning to normal. BJP MLA Shalabh Mani Tripathi also added fuel to the fire by targeting Muslim journalists in a controversial post, questioning their impartiality and accusing them of protecting rioters. His actions have been criticized for exacerbating communal tensions at a time when efforts are being made to restore peace.

Source: Siasat Daily

AI in Government: Navigating the Uncharted Terrain of Deepfakes and Misinformation

Blog on Government

In a landmark move that may reshape the political advertising landscape, the Republican National Committee (RNC) recently unveiled the first national campaign advertisement entirely crafted by artificial intelligence (AI). As President Biden kicked off his re-election campaign, the thirty-second RNC ad depicted a dystopian vision of four more years under his leadership, leveraging AI-generated images. While the RNC openly acknowledged its use of AI, the emergence of such technology in political advertising raises concerns about misinformation and its potential impact on public perception.

The integration of AI into political advertising echoes the predictions made by Robert Chesney and Danielle Citron in their 2018 Foreign Affairs article on deepfakes and the new disinformation war. The perfect storm of social media information cascades, declining trust in traditional media, and the increasing believability of deepfakes has created a breeding ground for misinformation. Recent instances, such as a deepfake video falsely portraying President Biden announcing a military draft for Ukraine, highlight how even transparently labeled AI-generated content can be misconstrued as authentic information.

While Chesney and Citron initially focused on the geopolitical threats posed by deepfakes, the technology’s entry into political advertising introduces a new dimension. Past campaigns have witnessed a race to produce provocative ads, with digitally manipulated images becoming a common tool. Notably, the McCain campaign in 2008 utilized manipulated images of Barack Obama, underscoring the evolving landscape of political communication.

However, the implications of AI-generated content extend beyond mere political attacks. Vulnerable populations, including women, people of color, and LGBTQI+ individuals, are likely to bear the brunt of these emerging technologies. A Center for Democracy and Technology report on the 2020 congressional election cycle revealed that women of color candidates were twice as likely to face mis- and disinformation campaigns online. The weaponization of deepfake technology in India against female politicians and journalists adds another layer of concern, emphasizing the potential for AI-generated content to be used in ways that undermine credibility and perpetuate harm.

The “liar’s dividend,” as coined by Citron and Chesney, presents another risk. Realistic fake videos and images may give politicians an escape route from accountability, allowing them to dismiss problematic content as AI-generated or a deepfake. In an era characterized by negative partisanship, the liar’s dividend could become a potent tool for evading responsibility.

As social media platforms grapple with the challenges posed by AI-generated content, there is a pressing need for comprehensive policies. Meta and TikTok have implemented measures to address deepfakes, but integrating these rules with existing political content policies remains a challenge. In response to the RNC ad, Representative Yvette Clarke introduced the “REAL Political Advertisements Act,” seeking mandatory disclosures for AI-generated content in political ads. The Biden administration’s recent action plan to promote responsible AI innovation and the Senate Judiciary Subcommittee on Privacy, Technology, and the Law’s hearing on AI oversight indicate a growing awareness of the need for regulatory measures. With another election cycle underway, the intersection of AI and politics demands urgent attention and thoughtful regulation to safeguard the integrity of political discourse and public trust.