There is a lot of excitement in healthcare about the use of artificial intelligence (AI) to improve clinical decision making.
AI developed by companies such as IBM Watson Health and DeepMind Health promises to help specialists diagnose patients more accurately. Two years ago, McKinsey collaborated with the European Union's EIT Health to produce a report examining the potential of AI in healthcare. Key opportunity areas identified by the report's authors included healthcare operations, clinical decision support, triage and diagnosis, nursing, chronic care management, and self-care.
“First, solutions are likely to address the low-hanging fruits of routine, repetitive, and largely administrative tasks that consume a great deal of physicians’ and nurses’ time, streamline healthcare operations, and increase adoption,” they wrote. “In this first phase, we would also include imaging AI applications that are already being used in specialties such as radiology, pathology and ophthalmology.”
The world of AI in healthcare has not stood still: in June, the European Parliament published a paper on artificial intelligence in healthcare, with a focus on applications, risks, and ethical and societal implications. The authors of the paper recommended that the risk assessment of AI should be domain-specific, since the clinical and ethical risks vary across medical fields such as radiology or pediatrics.
The paper’s authors wrote: “In the future regulatory framework, the validation of medical AI technologies should be harmonized and strengthened to assess and identify multi-faceted risks and limitations, by not only assessing the accuracy and robustness of the model, but also its algorithmic fairness, clinical safety, clinical acceptability, transparency and traceability.”
The validation of medical AI technologies is a central research focus at Erasmus MC, University Medical Center Rotterdam. Earlier this month, Erasmus MC began collaborating with health tech company Qure.ai to launch an AI medical imaging innovation lab.
The initial program will run for three years and will conduct detailed investigations into anomaly detection by AI algorithms for infectious and non-infectious disease states. Researchers hope to understand the potential use cases for AI in Europe and advise clinicians on best practices for adopting the technology specific to their needs.
Jacob Visser, Radiologist, Chief Medical Information Officer (CMIO) and Assistant Professor of Values-Based Imaging at Erasmus MC, said: “It is important to recognize that we have big challenges, an aging population, and a lot of technology that needs to be used responsibly. We are exploring how we can add value to clinicians and patients with emerging technologies and how to measure those advances.”
Visser’s role as CMIO acts as a bridge between the medical side and the technologists. “As a medical professional, the CMIO wants to steer IT in the right direction,” he said. “Clinicians are interested in the possibilities that IT can offer. New technological developments are leading medical professionals to see greater opportunities in areas such as precision medicine.”
Erasmus MC will run the lab and conduct research projects using Qure’s AI technology. The initial research project will focus on musculoskeletal and thoracic imaging. Visser said that when evaluating AI models, “it’s easy to verify that a fracture has been correctly identified.”
This allows researchers to estimate how well the AI is performing and gives them powerful insight into how often the AI misses a real fracture (a false negative) or incorrectly classifies a normal X-ray as showing a fracture (a false positive).
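The false-negative and false-positive counts described above are the raw ingredients of the standard clinical metrics, sensitivity and specificity. The sketch below (with made-up labels, not data from the Erasmus MC lab) shows how they would be computed when comparing a model's fracture calls against radiologist ground truth:

```python
# Hypothetical evaluation of a fracture-detection model against
# radiologist ground truth. Labels: 1 = fracture, 0 = no fracture.
ground_truth = [1, 1, 1, 0, 0, 0, 0, 1, 0, 1]
predictions  = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]

# Tally the four cells of the confusion matrix.
tp = sum(1 for t, p in zip(ground_truth, predictions) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(ground_truth, predictions) if t == 1 and p == 0)
fp = sum(1 for t, p in zip(ground_truth, predictions) if t == 0 and p == 1)
tn = sum(1 for t, p in zip(ground_truth, predictions) if t == 0 and p == 0)

sensitivity = tp / (tp + fn)  # share of real fractures the model catches
specificity = tn / (tn + fp)  # share of normal X-rays the model clears

print(f"False negatives: {fn}, false positives: {fp}")
print(f"Sensitivity: {sensitivity:.2f}, specificity: {specificity:.2f}")
```

In clinical imaging the two error types rarely cost the same: a missed fracture (false negative) usually matters more than an unnecessary second read (false positive), which is why both rates are reported separately rather than folded into a single accuracy figure.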
Regarding the level of scrutiny that goes into using AI in healthcare, Visser said, “Medical algorithms need approval, such as from the Food and Drug Administration [FDA] in the US, and must achieve CE certification in Europe.”
Speaking of the partnership with Qure.ai, he added, “We see the adoption of AI in healthcare at a critical juncture, with clinicians seeking expert advice on how best to assess the adoption of the technology. In Qure’s work to date, it is clear that they have gathered detailed insights into the effectiveness of AI in healthcare, and together we will be able to assess effective use cases in European clinical settings.”
But there are many challenges in using AI for health diagnostics. Even if an algorithm is FDA cleared or CE marked, that doesn’t necessarily mean it will work in a local clinical practice, Visser said. “We need to ensure that the AI algorithm meets the needs of our local practice,” he added. “What are the clinically relevant parameters that can be influenced by the results of the AI?”
The challenge is that a healthcare AI algorithm is developed on a specific data set. As a result, the resulting model may not be representative of real patient data in the local community. “You see a drop in performance when you validate an algorithm externally,” Visser said.
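The performance drop Visser describes can be illustrated with a toy simulation (entirely hypothetical, not based on any Qure.ai or Erasmus MC model): a fixed decision rule tuned on the development population is re-evaluated on a shifted local population, and its accuracy falls.

```python
import random

random.seed(42)

def simple_model(x, threshold=0.5):
    # A hypothetical fixed decision rule learned on the development data.
    return 1 if x > threshold else 0

def accuracy(samples):
    correct = sum(1 for x, label in samples if simple_model(x) == label)
    return correct / len(samples)

# Development population: positive cases score well above the threshold.
internal = [(random.uniform(0.6, 1.0), 1) for _ in range(500)] + \
           [(random.uniform(0.0, 0.4), 0) for _ in range(500)]

# Local population: a distribution shift pushes positive cases toward
# the threshold, so some now fall below it and are missed.
external = [(random.uniform(0.4, 0.8), 1) for _ in range(500)] + \
           [(random.uniform(0.0, 0.4), 0) for _ in range(500)]

print(f"Internal validation accuracy: {accuracy(internal):.2f}")
print(f"External validation accuracy: {accuracy(external):.2f}")
```

The rule has not changed between the two runs; only the population it is applied to has. This is why external validation on local patient data, rather than the developer's reported figures alone, is needed before deployment.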
This is analogous to pharmaceutical studies, where side effects may vary between populations. The pharmaceutical sector monitors usage in the field, and those observations feed back into the product development cycle.
Regarding his ambitions for the research from the new lab, Visser said: “I hope that within a year I can prove that the algorithms work and how accurate their diagnoses are, and I hope that we will have started to evaluate how these algorithms work in daily clinical practice.”