Wearable technologies, Real Time Health Systems for monitoring and diagnostics, apps to help with weight loss, insomnia, and mental health: the data landscape of health is changing, and it's changing fast. But where do we sit on the regulation of medical devices in this landscape?

We're beginning to see the shift from generic, reactive care models to models that are increasingly personalised and proactive. But technologies are never neutral; they bring with them opportunities, risks, and unintended consequences.

Don't get me wrong. I think the potential of these technologies, from real-time monitoring of health conditions to the improved management of patient flows and staffing needs, among many other applications, will benefit individuals, their families, and communities. But I'm left in a state of mild disquiet when it comes to issues of privacy, of data sharing where that data may be used for purposes to which I did not consent, and of possible unintended consequences.

So I took the opportunity to see Pat Baird, Regulatory Head of Global Software Standards at Philips, speak at the Farr Institute of Health Informatics Research, University College London. His focus is on getting the system right for handling all this data – from Big Data and AI to the Internet of Things.


Healthcare is changing

There is no denying that technology is radically changing healthcare.

Electronic Health Records have provided great insights into an individual's health – heart rate, blood pressure, glucose levels – but they only provide snapshots. These systems are increasingly being replaced by Real Time Health Systems (RTHS). The promise of RTHS is that data streamed in real time feeds predictive analytics, with the potential for better and faster diagnostics and treatment.

And digital therapeutics are increasingly coming into use to help support individuals with anything from weight loss to insomnia to mental health.

For Pat, this is where AI and Machine Learning come in, as they will support the management of all this data. But there's a clear acknowledgement that they bring risks:

  • After system development, will the system continue to learn and refine its internal model? If so, will we have an understanding of what’s going on?
  • To what extent is human decision making involved? Does the system make suggestions that we can disagree with, or does it make decisions on its own?

We can pay a high price if we let technology lead while we follow. And our reliance on technology is already profound. During the LA wildfires, police urged those trying to escape the flames not to use their navigation apps. Why? Well, some of those roads had no traffic for good reason: they were on fire.


On mistaking turtles for rifles

And AI and machine learning can and do get it wrong. It remains unclear what this might mean for our own health provision and for health systems as a whole.

Here are just a few examples:

These risks, and what they potentially mean for our future, should not be underestimated. They point to deeper questions about the boundaries between human and machine decision making, and about accountability, transparency, and explainability.


The rules don’t work: the regulation of medical devices

Data management and regulation are not fit for purpose; that much is clear. A significant amount of work is going on to develop sets of principles that could form the basis of a shared ethical framework. These could potentially go on to inform regulations (where needed), which would have to be sector- or topic-specific. What is needed for the financial sector, for instance, will differ from any possible regulations in healthcare, though there may be common underlying themes such as data protection and sharing.

For Pat, the current frameworks are not adequate for regulating new types of medical devices, and there are lots of discussions happening in this space. The two key players in the regulation of medical devices in the UK and US are the Medicines and Healthcare products Regulatory Agency (MHRA) and the Food and Drug Administration (FDA). They, in conjunction with other organisations, are collaborating on developing medical device standards for machine learning and AI.

The ambition of this work is to build a UK–US consensus position on the particular challenges algorithms pose in healthcare. And, as would be expected, discussions are also taking place with other standards agencies around the world.

The key discussion point is what would make a good regulatory environment. This will pick up issues such as data protection and accountability. They plan to put out a position paper by the end of the year, and it's one I look forward to reading.


Get in touch

If you have a question, are interested in working with me, or would just like a chat, drop me a message via my contact page.