ArdorComm Media Group

Responsible AI

IIT Madras and Vidhi Centre Recommend Participatory AI Governance Model for Inclusive Development

A recent study by IIT Madras, in collaboration with the Vidhi Centre for Legal Policy, advocates a participatory governance model for Artificial Intelligence (AI) in India, aiming to set a global standard for inclusive AI development. The report emphasizes that involving diverse stakeholders in AI’s lifecycle enhances accountability, transparency, and fairness in AI systems.

The research, spearheaded by IIT Madras’ Centre for Responsible AI (CeRAI) at the Wadhwani School of Data Science and Artificial Intelligence (WSAI), brings together technologists, legal experts, and policy researchers to explore the benefits of a participatory approach. Real-world case studies across sectors reveal how public involvement can yield AI systems that are better aligned with societal values and ethical standards.

“The widespread adoption of AI has fundamentally reshaped our public and private sectors,” explained Prof. B. Ravindran, Head of CeRAI, IIT Madras. He highlighted a key finding: those impacted by AI systems often lack a voice in their development. “This study aims to close that gap by recommending participatory approaches that prioritize responsible, human-centric AI development,” he added.

The report also provides a practical, sector-agnostic framework for identifying and integrating diverse perspectives throughout the AI development process. Shehnaz Ahmed, Lead for Law and Technology at the Vidhi Centre, noted that while the value of inclusivity in AI is recognized, frameworks for its implementation remain unclear. “Our findings demonstrate how a structured, participatory model can guide ethical development, especially in sensitive applications like facial recognition and healthcare,” she explained.

The study suggests that a participatory model not only strengthens public trust but also accelerates AI’s acceptance across sectors by fostering transparency and accountability. With global relevance, the framework aims to support ethical, safe, and equitable AI practices worldwide.

Source: Shiksha.com | Photo Credit: Shiksha.com

AI in Government: Navigating the Uncharted Terrain of Deepfakes and Misinformation


In a landmark move that may reshape the political advertising landscape, the Republican National Committee (RNC) recently unveiled the first national campaign advertisement entirely crafted by artificial intelligence (AI). As President Biden kicked off his re-election campaign, the thirty-second RNC ad depicted a dystopian vision of four more years under his leadership, leveraging AI-generated images. While the RNC openly acknowledged its use of AI, the emergence of such technology in political advertising raises concerns about misinformation and its potential impact on public perception.

The integration of AI into political advertising echoes the predictions made by Robert Chesney and Danielle Citron in their 2018 Foreign Affairs article on deepfakes and the new disinformation war. The perfect storm of social media information cascades, declining trust in traditional media, and the increasing believability of deepfakes has created a breeding ground for misinformation. Recent instances, such as a deepfake video falsely portraying President Biden announcing a military draft for Ukraine, highlight the danger of even transparently shared AI-generated content being misconstrued as authentic information.

While Chesney and Citron initially focused on the geopolitical threats posed by deepfakes, the technology’s entry into political advertising introduces a new dimension. Past campaigns have witnessed a race to produce provocative ads, with digitally manipulated images becoming a common tool. Notably, the McCain campaign in 2008 used manipulated images of Barack Obama, underscoring the evolving landscape of political communication.

However, the implications of AI-generated content extend beyond mere political attacks. Vulnerable populations, including women, people of color, and LGBTQI+ individuals, are likely to bear the brunt of these emerging technologies. A Center for Democracy and Technology report on the 2020 congressional election cycle revealed that women of color candidates were twice as likely to face mis- and disinformation campaigns online. The weaponization of deepfake technology in India against female politicians and journalists adds another layer of concern, emphasizing the potential for AI-generated content to be used in ways that undermine credibility and perpetuate harm.

The “liar’s dividend,” as coined by Citron and Chesney, presents another risk. Realistic fake videos and images may provide politicians with an escape route from accountability, allowing them to dismiss problematic content as AI-generated or a deepfake. In an era characterized by negative partisanship, the liar’s dividend could become a potent tool for evading responsibility.

As social media platforms grapple with the challenges posed by AI-generated content, there is a pressing need for comprehensive policies. Meta and TikTok have implemented measures to address deepfakes, but integrating these rules with existing political content policies remains a challenge. In response to the RNC ad, Representative Yvette Clarke introduced the “REAL Political Advertisements Act,” seeking mandatory disclosures for AI-generated content in political ads. The Biden administration’s recent action plan to promote responsible AI innovation and the Senate Judiciary Subcommittee on Privacy, Technology, and the Law’s hearing on AI oversight indicate a growing awareness of the need for regulatory measures.
With another election cycle underway, the intersection of AI and politics demands urgent attention and thoughtful regulation to safeguard the integrity of political discourse and public trust.

IIT Madras and Ericsson Forge Strategic Partnership for Responsible AI Research

The Centre for Responsible AI (CeRAI) at the Indian Institute of Technology Madras (IIT Madras) has entered into a strategic partnership with Ericsson to jointly conduct research in the realm of Responsible AI. Ericsson, as a ‘Platinum Consortium Member,’ has committed to this collaboration for a five-year period. As part of the collaboration, Ericsson Research will actively engage in and support all research endeavours undertaken at CeRAI.

To commemorate the partnership, a symposium titled ‘Responsible AI for Networks of the Future’ was organized. Distinguished leaders from IIT Madras and Ericsson Research came together to deliberate on the latest developments and breakthroughs in the field of Responsible AI. During the symposium, a panel discussion on ‘Responsible AI for Networks of the Future’ took place, marking the official launch of the collaboration. The event also served as a platform to showcase ongoing research projects at the Centre for Responsible AI, including initiatives related to large language models (LLMs) in healthcare and participatory AI.

Artificial Intelligence (AI) research holds immense significance for Ericsson, especially in light of the forthcoming 6G networks, which are poised to be driven by AI algorithms, according to IIT Madras. Speaking about the collaborative efforts, Professor B. Ravindran stated, “As 5G and 6G networks usher in a new era of connectivity and applications, it becomes imperative to ensure that AI models are not only capable but also transparent and adaptable to the diverse range of applications they will serve.” The collaboration seeks to address these critical aspects in the evolving landscape of AI and telecommunications.