ArdorComm Media Group


WHO Calls for Regulation of AI in Healthcare, Citing Bias and Privacy Risks

By ArdorComm News Network

The World Health Organization (WHO) is calling for the regulation of artificial intelligence (AI) in healthcare because of the risks it carries, according to a report. WHO emphasizes the need to establish the safety and efficacy of AI tools, make them accessible to those who need them, and foster dialogue among AI developers and users. While recognizing AI's potential to enhance healthcare by strengthening clinical trials, improving diagnosis and treatment, and deepening healthcare professionals' knowledge and skills, the report from data and analytics company GlobalData warns that AI technologies are being deployed rapidly, without a full understanding of their long-term implications, which could put healthcare professionals and patients at risk.

Alexandra Murdoch, Senior Analyst at GlobalData, acknowledges the significant benefits of AI in healthcare but also points to the risks of rapid adoption. AI systems in healthcare often have access to personal and medical information, which makes regulatory frameworks essential for ensuring privacy and security. Other challenges include unethical data collection, cybersecurity vulnerabilities, the reinforcement of biases, and the spread of misinformation. One example of AI bias comes from a Stanford University study, which found that some AI chatbots provided inaccurate medical information about people of color.

In that study, nine questions were posed to four AI chatbots, including OpenAI's ChatGPT and Google's Bard, and all four returned inaccurate information about race and kidney and lung function. Such false medical information is a cause for concern, as it could lead to misdiagnoses and improper treatment for patients of color.

WHO has identified six areas for regulating AI in healthcare, with a focus on managing the risk of AI amplifying biases present in training data. The six areas are: transparency and documentation; risk management; data validation and clarity about AI's intended use; a commitment to data quality; privacy and data protection; and the promotion of collaboration.

Murdoch hopes that, with these regulatory areas outlined, governments and regulatory bodies can develop regulations that safeguard healthcare professionals and patients while fully harnessing the potential of AI in healthcare.
