WHO Europe warns that AI in healthcare requires stronger safeguards

COPENHAGEN: The growing use of artificial intelligence in healthcare requires stronger legal and ethical protections for patients and medical staff, the World Health Organization’s European office warned in a report published on Wednesday.
The findings come from a study on how AI is adopted and regulated in European health systems, based on input from 50 of the 53 states in the WHO European region, which also covers Central Asia.
According to the report, only four countries, around 8%, have so far adopted a dedicated national strategy for AI in health, while seven others are developing one.
“We find ourselves at a crossroads,” Natasha Azzopardi-Muscat, WHO director of health systems for Europe, said in a statement.
“AI will either be used to improve people’s health and well-being, reduce the burden on our exhausted health workers, and reduce health care costs, or it could compromise patient safety, compromise privacy, and entrench inequities in care,” she said.
Nearly two-thirds of countries in the region are already using AI-assisted diagnostics, including imaging and detection, while half of countries have introduced AI chatbots for patient engagement and support.
The WHO urged its member states to address “potential risks” associated with AI, including “biased or poor-quality results, automation bias, erosion of clinician skills, reduced clinician-patient interaction, and inequitable outcomes for marginalized populations.”
Regulation is struggling to keep pace with the technology, WHO Europe said, noting that 86% of member states identified legal uncertainty as the main barrier to AI adoption.
“Without clear legal standards, clinicians may be reluctant to rely on AI tools and patients may not have clear avenues for recourse if something goes wrong,” said David Novillo Ortiz, WHO regional advisor for data, artificial intelligence and digital health.
WHO Europe said countries should clarify responsibilities, establish redress mechanisms in the event of harm, and ensure that AI systems "are tested for safety, fairness and effectiveness in the real world before reaching patients."