ArdorComm Media Group


Ashwini Vaishnaw Identifies Four Key Challenges for News Media in the Digital Era

Union Minister for Information and Broadcasting Ashwini Vaishnaw highlighted four significant challenges confronting news media in a rapidly evolving media landscape. Speaking at a National Press Day event organized by the Press Council of India in Delhi, he outlined concerns related to fake news, algorithmic bias, fair compensation, and the impact of AI on intellectual property rights.

Fake News and Disinformation

Vaishnaw emphasized the pervasive threat posed by fake news, stating: “The rapid spread of fake news undermines trust, endangers democracy, and raises questions about accountability for content on digital platforms.” He urged society to address the lack of verification and responsibility on these platforms, pointing out their role in exacerbating social tensions globally.

Fair Compensation for Conventional Media

The shift in news consumption from traditional to digital media has created financial strain for conventional outlets. Vaishnaw highlighted the asymmetric power dynamics between content creators and digital platforms, advocating for fair compensation: “Traditional media invests significant time and resources in creating verified content. This effort must be suitably compensated to preserve journalistic integrity.”

Algorithmic Bias

The Minister flagged the issue of algorithmic manipulation by digital platforms, whose systems are designed to maximize engagement rather than prioritize factual accuracy. “Algorithmic bias can incite strong reactions and misinformation, with severe societal consequences, especially in a diverse country like India,” he warned. Vaishnaw urged platforms to develop solutions that mitigate their systems’ adverse impacts.

Impact of AI on Intellectual Property Rights

Vaishnaw raised ethical and economic concerns over AI models trained on creators’ content without proper acknowledgment or compensation. “AI-generated content is derived from vast databases of music, writing, and art, yet original creators often go uncredited and uncompensated. This is not just an economic issue but an ethical one,” he remarked.

Vaishnaw stressed the need for open debate and societal consensus to address these challenges: “As pioneers in technology, we must rise above politics, engage in meaningful discussions, and develop solutions to protect the fabric of our society.” These issues, he warned, will only grow in prominence, necessitating proactive measures to safeguard democratic values and journalistic integrity in the digital age.

Source: Indiatvnews
Photo Credit: Indiatvnews

IIT Madras and Vidhi Centre Recommend Participatory AI Governance Model for Inclusive Development

A recent study by IIT Madras, in collaboration with the Vidhi Centre for Legal Policy, advocates a participatory governance model for Artificial Intelligence (AI) in India, aiming to set a global standard for inclusive AI development. The report emphasizes that involving diverse stakeholders throughout AI’s lifecycle enhances accountability, transparency, and fairness in AI systems.

The research, spearheaded by IIT Madras’ Centre for Responsible AI (CeRAI) at the Wadhwani School of Data Science and Artificial Intelligence (WSAI), brings together technologists, legal experts, and policy researchers to explore the benefits of a participatory approach. Real-world case studies across sectors reveal how public involvement can yield AI systems that are better aligned with societal values and ethical standards.

“The widespread adoption of AI has fundamentally reshaped our public and private sectors,” explained Prof. B. Ravindran, Head of CeRAI, IIT Madras. He highlighted a key finding: those impacted by AI systems often lack a voice in their development. “This study aims to close that gap by recommending participatory approaches that prioritize responsible, human-centric AI development,” he added.

The report also provides a practical, sector-agnostic framework for identifying and integrating diverse perspectives throughout the AI development process. Shehnaz Ahmed, Lead for Law and Technology at the Vidhi Centre, noted that while the value of inclusivity in AI is widely recognized, frameworks for implementing it remain unclear. “Our findings demonstrate how a structured, participatory model can guide ethical development, especially in sensitive applications like facial recognition and healthcare,” she explained.

The study suggests that a participatory model not only strengthens public trust but also accelerates AI’s acceptance across sectors by fostering transparency and accountability. With global relevance, the framework aims to support ethical, safe, and equitable AI practices worldwide.

Source: Shiksha.com
Photo Credit: Shiksha.com