ArdorComm Media Group

AI regulation

India Introduces AI Governance Guidelines to Ensure Safe and Responsible Adoption

The Indian government has unveiled its first set of Artificial Intelligence (AI) governance guidelines, outlining a framework for the safe, transparent, and ethical use of AI technologies. The non-binding rules, released on Wednesday, are expected to shape India’s long-term vision for AI regulation ahead of the IndiaAI Impact Summit scheduled for February next year.

Developed under the Ministry of Electronics and Information Technology (MeitY), the guidelines recommend potential amendments to the Information Technology (IT) Act to better classify AI systems and define liability across the AI value chain. The document highlights that the current definition of “intermediary” under the IT Act, which covers telecom operators, search engines, and even cyber cafés, is outdated in the context of autonomous AI systems capable of generating data independently.

Principal Scientific Adviser Ajay Sood noted that the new framework aims to clarify the responsibilities of AI developers and deployers while also ensuring accountability. He added that the framework could serve as a model for AI governance in the Global South, especially for countries with limited regulatory resources.

The guidelines also propose an India-specific AI risk assessment framework based on real-world evidence of harm, along with a national database of AI incidents to track misuse, bias, and potential threats. This centralised repository will collect data from smaller regional databases managed by sectoral regulators, helping policymakers better understand the societal and security implications of AI technologies.

The framework further recommends establishing new institutions to oversee AI policy, including an AI Governance Group, a permanent inter-ministerial body responsible for coordination and policy development, and leveraging the newly formed AI Safety Institute as the lead authority for ensuring safe and trusted AI use in India. Other key proposals include adopting regulatory sandboxes to allow innovation in controlled environments with limited legal exposure, and mandating accessible grievance redressal mechanisms through the existing Grievance Appellate Committee process. The guidelines also stress the need to update copyright laws to support large-scale AI model training and to clarify how digital platforms are classified.

MeitY Secretary S. Krishnan said the government is committed to acting when necessary to ensure AI is developed responsibly and ethically. The document, shaped by a study of AI policies in the US, European Union, and China and informed by over 2,500 submissions from stakeholders including academia, industry, and government bodies, marks a significant step in India’s effort to build a robust governance ecosystem for emerging technologies.

Source: Economic Times

UN Unveils Key Recommendations for Global AI Governance

The United Nations (UN) has released a landmark report proposing a global framework to address the risks and governance gaps associated with artificial intelligence (AI). Titled “Governing AI for Humanity,” the report outlines seven key recommendations to ensure that AI development aligns with human rights, ethical principles, and sustainable development goals.

Prepared by a 39-member UN advisory body established last year, the report highlights the need for a multi-stakeholder approach, urging governments, private companies, civil society, and international organizations to collaborate on AI governance. The recommendations will be discussed at an upcoming UN summit later this month. In a video statement accompanying the report’s release, UN Secretary-General António Guterres called the document a “key milestone” in the UN’s ongoing efforts to ensure that AI serves the common good and benefits all of humanity.

Among the proposals, the report calls for the creation of a global AI governance system that is inclusive, transparent, and accountable. It advocates an international AI standards exchange and a global AI capacity development network to strengthen governance capabilities, and it stresses the need to address AI-related risks such as bias, privacy violations, and job displacement.

One notable recommendation is the establishment of a global AI fund to close gaps in governance capacity and collaboration. The UN also proposes a global AI data framework to enhance transparency and accountability in AI systems, and warns that the concentration of AI development in a few multinational companies could lead to the technology being imposed on populations without proper input or oversight. To support these governance efforts, the report proposes the creation of a small AI office to coordinate and implement the recommendations.

As AI continues to evolve rapidly, the UN’s report aims to ensure that the technology remains a force for good, aligning with global standards and benefiting all sectors of society.

Source: CGTN

WHO Calls for Regulation of AI in Healthcare Due to Risks, Citing Bias and Privacy Concerns

The World Health Organization (WHO) is calling for the regulation of artificial intelligence (AI) in healthcare because of the risks it poses, according to a report by data and analytics company GlobalData. WHO emphasizes the need to establish the safety and efficacy of AI tools, make them accessible to those who need them, and foster communication among AI developers and users.

While recognizing AI’s potential to enhance healthcare by strengthening clinical trials, improving diagnosis and treatment, and deepening healthcare professionals’ knowledge and skills, the report highlights that AI technologies are being deployed rapidly without a full understanding of their long-term implications, which could pose risks to healthcare professionals and patients.

Alexandra Murdoch, a Senior Analyst at GlobalData, acknowledges the significant benefits of AI in healthcare but also points to the risks of rapid adoption. AI systems in healthcare often have access to personal and medical information, necessitating regulatory frameworks to ensure privacy and security. Other challenges include unethical data collection, cybersecurity vulnerabilities, and the reinforcement of biases and spread of misinformation.

One example of AI bias comes from a Stanford University study, which found that some AI chatbots provided inaccurate medical information about people of color. In the study, nine questions were posed to four AI chatbots, including OpenAI’s ChatGPT and Google’s Bard, and all four provided inaccurate information related to race and kidney and lung function. Such false medical information is a cause for concern, as it could lead to misdiagnoses and improper treatment for patients of color.

WHO has identified six areas for regulating AI in healthcare, with a focus on managing the risk that AI amplifies biases present in training data: transparency and documentation; risk management; data validation and clarity about AI’s intended use; a commitment to data quality; privacy and data protection; and the promotion of collaboration. Murdoch hopes that by outlining these regulatory areas, governments and regulatory bodies can develop rules that safeguard healthcare professionals and patients while fully harnessing AI’s potential in healthcare.
