ArdorComm Media Group

AI regulation

UN Unveils Key Recommendations for Global AI Governance

The United Nations (UN) has released a landmark report proposing a global framework to address the risks and governance gaps associated with artificial intelligence (AI). Titled “Governing AI for Humanity,” the report outlines seven key recommendations to ensure that AI development aligns with human rights, ethical principles, and sustainable development goals.

Prepared by a 39-member UN advisory body established last year, the report highlights the need for a multi-stakeholder approach, urging governments, private companies, civil society, and international organizations to collaborate on AI governance. The recommendations will be discussed at a UN summit later this month.

In a video statement accompanying the report’s release, UN Secretary-General António Guterres called the document a “key milestone” in the UN’s ongoing efforts to ensure that AI serves the common good and benefits all of humanity.

Among the proposals, the report calls for the creation of a global AI governance system that is inclusive, transparent, and accountable. It advocates an international AI standards exchange and a global AI capacity development network to strengthen governance capabilities, and it stresses the need to address AI-related risks such as bias, privacy violations, and job displacement.

One notable recommendation is the establishment of a global AI fund to close gaps in governance capacity and collaboration. The UN also proposes a global AI data framework to enhance transparency and accountability in AI systems, and warns that the concentration of AI development in a few multinational companies could lead to the technology being imposed on populations without proper input or oversight.

To support these governance efforts, the UN proposes creating a small AI office to coordinate and implement the recommendations. As AI continues to evolve rapidly, the report aims to ensure that the technology remains a force for good, aligning with global standards and benefiting all sectors of society.

Source: CGTN

WHO Calls for Regulation of AI in Healthcare, Citing Bias and Privacy Risks

The World Health Organization (WHO) is calling for the regulation of artificial intelligence (AI) in healthcare because of the risks it carries, according to a report. WHO emphasizes the need to establish the safety and efficacy of AI tools, make them accessible to those who need them, and foster communication among AI developers and users.

While recognizing AI’s potential to enhance healthcare by strengthening clinical trials, improving diagnosis and treatment, and broadening healthcare professionals’ knowledge and skills, the report by data and analytics company GlobalData highlights the rapid deployment of AI technologies without a full understanding of their long-term implications, which could pose risks to healthcare professionals and patients.

Alexandra Murdoch, a Senior Analyst at GlobalData, acknowledges the significant benefits of AI in healthcare but also points to the risks of rapid adoption. AI systems in healthcare often have access to personal and medical information, which makes regulatory frameworks essential to ensure privacy and security. Other challenges include unethical data collection, cybersecurity vulnerabilities, and the reinforcement of biases and spread of misinformation.

One example of AI bias comes from a Stanford University study, which found that some AI chatbots provided inaccurate medical information about people of color. In the study, nine questions were posed to four AI chatbots, including OpenAI’s ChatGPT and Google’s Bard, and all four provided inaccurate information related to race and kidney and lung function. Such false medical information is a cause for concern because it could lead to misdiagnoses and improper treatment for patients of color.

WHO has identified six areas for regulating AI in healthcare, with a focus on managing the risk of AI amplifying biases in training data: transparency and documentation, risk management, data validation and clarity about AI’s intended use, a commitment to data quality, privacy and data protection, and the promotion of collaboration. Murdoch hopes that by outlining these regulatory areas, governments and regulatory bodies can develop regulations that safeguard healthcare professionals and patients while fully harnessing the potential of AI in healthcare.