The artificial intelligence (AI) healthcare market is predicted to grow from US$11bn in 2021 to US$187bn by 2030, according to Statista. This huge increase will likely result in some significant changes to how healthcare providers, hospitals, pharmaceutical, and biotechnology companies operate.
Elements such as improved machine learning (ML) algorithms, increased access to data, more cost-effective hardware, and the availability of 5G have certainly contributed to the growing application of AI in the healthcare industry, rapidly accelerating the pace of change. A major benefit of both AI and ML technologies is that they can sift through enormous amounts of data, analysing it at a much faster pace than humans.
Although healthcare organisations are using AI to drastically improve efficiency across a number of processes, it is crucial that the technology is implemented in a way that puts safeguarding patients' personal and sensitive data at the forefront.
The safety of AI in healthcare is of utmost importance
The World Health Organization (WHO) has released a new publication on AI for health, emphasising the importance of safety, effectiveness, and communication among stakeholders. With the increasing availability of healthcare data and the rapid progress in analytic techniques, AI has the potential to truly transform the health sector and its outcomes.
However, AI technologies, including large language models (LLMs), are often being deployed rapidly, sometimes without a full understanding of how they will perform once in use. That performance can benefit or harm users, including healthcare professionals and patients. WHO's publication therefore aims to help create and maintain strong legal and regulatory frameworks that protect privacy, security, and integrity when health data is used in AI systems.
According to Dr Tedros Adhanom Ghebreyesus, WHO Director-General, “Artificial intelligence holds great promise for health but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation. This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks.”
The areas of regulation of AI for healthcare
Responding to the growing need to responsibly manage the rapid increase of AI health technologies, the publication outlines six areas for regulation.
- To foster trust, the publication stresses the importance of transparency and documentation, for example by tracking development processes across the entire product lifecycle.
- In terms of risk management, issues such as 'intended use,' 'continuous learning,' human interventions, model training, and cybersecurity threats should be addressed, while simplifying models whenever possible.
- External validation of data and the clear intended use of AI play a vital role in ensuring safety and facilitating regulation.
- A commitment to the quality of data is crucial to preventing systems from amplifying biases and errors.
- Addressing the challenges of regulations such as Europe's General Data Protection Regulation (GDPR) and the United States Health Insurance Portability and Accountability Act (HIPAA) involves a focus on understanding the scope of jurisdiction and consent requirements, with an emphasis on privacy and data protection.
- Promoting collaboration among regulatory bodies, patients, healthcare professionals, industry representatives, and government partners can help ensure that products and services remain compliant with regulations throughout their lifecycles.
AI has the potential to revolutionise the healthcare industry in many ways. However, AI systems are incredibly complex, relying on both the code they are built on and the data they are trained on. It can be challenging for AI models to accurately represent the diversity of populations, which can lead to biases, inaccuracies, or even outright failure. Therefore, to help address these concerns, better regulation can be used to ensure that attributes such as gender, race, and ethnicity are reported, and that datasets are intentionally made more representative.
BizClik is a global provider of B2B digital media platforms that cover Executive Communities for CEOs, CFOs, CMOs, Sustainability leaders, Procurement & Supply Chain leaders, Technology & AI leaders, Cyber leaders, FinTech & InsurTech leaders as well as covering industries such as Manufacturing, Mining, Energy, EV, Construction, Healthcare and Food.
BizClik – based in London, Dubai, and New York – offers services such as content creation, advertising & sponsorship solutions, webinars & events.