Artificial Intelligence (AI) and machine learning are increasingly being used across healthcare. From diagnostics to targeted treatments, there is emerging evidence of clinical benefit. However, challenges remain, not least the lack of robust governance, regulation, and standards to ensure applications are safe, effective, and quality assured.
The publication by the BSI and the Association for the Advancement of Medical Instrumentation (AAMI) of The emergence of artificial intelligence and machine learning algorithms in healthcare: Recommendations to support governance and regulation marks significant progress on this front.
The report was commissioned by the UK’s Medicines and Healthcare products Regulatory Agency (MHRA) and includes input from the BSI and AAMI, the US Food and Drug Administration (FDA), and other stakeholders.
Why it matters
AI describes a set of advanced technologies that enable machines to carry out highly complex tasks effectively – tasks that would otherwise require intelligence equivalent to, or greater than, that of a person performing them.
As I noted in a previous article on the regulation of medical devices, there are risks associated with the use of AI in a health context. These include:
- After system development, will the system continue to learn and refine its internal model? How do we regulate medical devices that ‘learn’?
- To what extent is human decision making involved? Does the system make suggestions that we can disagree with, or does it make decisions on its own?
And at the heart of this is the concern that AI and machine learning can and do get it wrong. Outside the context of health, there are some well-known examples of this – Tay, Microsoft’s Twitter bot, which went from friendly to racist and sexist in less than 24 hours, and the case last year of a woman killed by an experimental Uber self-driving car in the US.
Why AI is different
There is a strong case for the introduction of new standards, regulations, and governance frameworks for AI in health.
First, AI technologies introduce a level of autonomy. This raises particular challenges in areas where AI solutions could provide unsupervised patient care (p.5), for example the monitoring and adjustment of medication for people with long-term health conditions.
Second, outputs can change over time in response to new data, as is the case with ‘adaptive’ algorithms. This means there is a real need for effective supervision of continuous learning systems. At the heart of this is the question: how do we regulate devices that learn?
The UK’s National Institute for Health and Care Excellence (NICE) published its Evidence Standards Framework for Digital Health Technologies in March 2019. These standards differentiate between AI using fixed algorithms, i.e. where outputs do not automatically change over time, and AI using adaptive algorithms, i.e. where algorithms automatically and continually update over time, meaning that outputs will also change.
And the distinction between ‘fixed’ and ‘adaptive’ algorithms is an important one. While the NICE Evidence Standards may be the most appropriate to use in the case of fixed algorithms, the framework makes clear that separate standards will need to apply to adaptive algorithms.
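To make this distinction concrete, here is a minimal sketch (in Python with scikit-learn, which is my own illustrative choice rather than anything specified by NICE or the report) contrasting a ‘fixed’ model, locked at development time, with an ‘adaptive’ one that keeps updating as new data arrives:

```python
# Minimal sketch (hypothetical): 'fixed' vs 'adaptive' algorithms.
# A fixed model is trained once and then locked; an adaptive model
# keeps calling partial_fit on new data, so its outputs can change.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_initial = rng.normal(size=(200, 5))
y_initial = (X_initial[:, 0] > 0).astype(int)

# Fixed algorithm: trained at development time, then frozen.
fixed_model = SGDClassifier(loss="log_loss", random_state=0)
fixed_model.fit(X_initial, y_initial)

# Adaptive algorithm: continues to learn from post-deployment data.
adaptive_model = SGDClassifier(loss="log_loss", random_state=0)
adaptive_model.partial_fit(X_initial, y_initial, classes=[0, 1])

patient = rng.normal(size=(1, 5))
print("Before new data:",
      fixed_model.predict_proba(patient),
      adaptive_model.predict_proba(patient))

# New data arrives after deployment (here, with a shifted relationship).
X_new = rng.normal(size=(200, 5))
y_new = (X_new[:, 1] > 0).astype(int)
adaptive_model.partial_fit(X_new, y_new)

print("After new data: ",
      fixed_model.predict_proba(patient),     # unchanged
      adaptive_model.predict_proba(patient))  # may have drifted
```

The fixed model returns the same output for the same patient indefinitely; the adaptive model’s output can drift once it has learned from post-deployment data, which is exactly why continuous learning systems call for ongoing supervision.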
Important here will be how the principles outlined in the UK government’s Code of conduct for data-driven health and care technology move from principles to real-world standards, regulations, and governance. Showing ‘what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision’ will be the key here (Principle 7). I suspect that, going forward, there will be requirements to perform regular audits of the metrics and impacts of algorithms in use in some use cases. This may also become the case with ‘fixed’ algorithms if there is any change of context.*
And the third point concerns explainability and understanding of how outputs and decisions have been reached. This is significant. A real challenge with algorithms is that it can be difficult or impossible to understand the underlying logic behind their outputs. While under GDPR there are restrictions on the use of automated decision making with regard to individuals and profiling, the scope of this is yet to be tested [see Rights related to automated decision making including profiling].
This point on explainability and understanding is important to both ensure systems are safe and effective, and to ensure public and professional confidence and trust.
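As a rough illustration of the explainability gap (again a hypothetical Python sketch with made-up feature names, not anything drawn from the report or GDPR guidance), compare a model whose reasoning can be read directly from its coefficients with one whose logic is spread across hundreds of trees:

```python
# Minimal sketch (hypothetical): an interpretable model exposes the
# weight each input carries, while a more opaque model does not offer
# a comparably simple account of how a given output was reached.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
features = ["age", "blood_pressure", "hba1c", "bmi"]  # illustrative names only

# Interpretable: each coefficient says how a feature pushes the prediction.
interpretable = LogisticRegression().fit(X, y)
for name, coef in zip(features, interpretable.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# Opaque: may be accurate, but the 'why' behind any one prediction is spread
# across hundreds of trees and is far harder to explain to a patient or clinician.
opaque = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("prediction:", opaque.predict(X[:1]))
```

Both models may perform well, but only the first offers a straightforward account of how a given output was reached.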
The recommendations
The report makes a number of recommendations. These include:
- Create an international task force to provide oversight for AI in healthcare.
- Undertake mapping to review the current standards landscape and identify opportunities.
- Develop a proposal for a terminology and categorization standard for AI in healthcare.
- Develop a proposal for guidance to cover validation processes.
- Create a communications and engagement plan.
All these recommendations make sense. I particularly welcome the comms and engagement plan as one of the key areas of work. This is likely to include a wide range of stakeholders: patients and the public; health and care professionals; policy makers; data scientists and so on. These ongoing conversations will be essential for ensuring confidence and trust in AI systems.
Next Steps
The AI policy and regulatory environment in health is fast moving and complex. Over the next few months, BSI and AAMI intend to publish draft plans for comment on how they intend to implement these recommendations. This is something I very much look forward to reading.
*With thanks to Dr Allison Gardner for clarifying this for me.
Get in touch
If you have a question or if you’re interested in working with me, or would just like a chat, drop me a message via my contact page.