European Perspectives on AI Medical Devices

12 Mar 2020

The U.S. Food and Drug Administration (FDA) has recently authorized the marketing of software based on Artificial Intelligence (AI) intended to guide medical professionals in capturing quality cardiac ultrasounds used to identify and treat heart disease. The software, Caption Guidance, is based on machine learning technology that differentiates between acceptable and non-acceptable image quality. In addition, it is connected to an AI-based interface designed to give prescriptive commands on the operation of the ultrasound probe to professionals without prior training, enabling them to capture relevant images.

Considering that heart disease is one of the leading causes of death worldwide, and that this technology promotes access to effective cardiac diagnostics by professionals without prior experience with ultrasound technologies, it is a potentially lifesaving tool.

Several AI-based medical devices have been analysed and approved by the FDA since 2018. New instruments were included in the premarket submission process in order to analyse the transparency and accuracy of the respective AI algorithms. This was discussed in the FDA's paper published in April 2019, “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) - Discussion Paper and Request for Feedback”. This movement has encouraged even more investment in the sector and has been influencing the European scenario.

The European Commission is working on the development of AI regulation from multiple perspectives, and it appears to have reached conclusions on why and how to regulate it through the publication of the “Ethics Guidelines for Trustworthy AI” in April 2019 by the European Commission's High-Level Expert Group on Artificial Intelligence. The recommendations relate to the principles of ethics, lawfulness and robustness, from both technical and societal perspectives.

Specifically in relation to the health sector, Regulation (EU) 2017/745 on Medical Devices (the Medical Devices Regulation), which will be fully applicable in May 2020, provides that software created with the clear intention of being used for medical purposes is considered a medical device. Therefore, AI-based health technologies that help decide on the treatment of diseases through prediction or prognosis usually fall under this definition.

In this regard, while different sectors are pressing for practical AI regulation, the European health sector has been mentioned as one possible case in which pre-existing regulation, such as the Medical Devices Regulation and its certification process, may be sufficient to keep up with AI-based technologies.

Although medical devices are regulated by national authorities, the European Medicines Agency (EMA) is responsible for the assessment, authorization and supervision of certain categories in accordance with European legislation.

Considering the potential of innovative technologies to transform healthcare, including AI-based medical devices, as well as the risks they raise, EMA has joined a European task force on the matter[1] and has launched its main strategic goals[2], including the exploitation of digital technologies and artificial intelligence in decision making. In addition to developing expertise to engage with digital technology, artificial intelligence and cognitive computing, EMA intends to create an AI test laboratory to explore applications of AI-based technologies that support data-driven decisions.

In general, the main European concerns about AI relate to transparency and accountability, given the complexity of the respective algorithms, but especially to the identification of unlawful biases and prejudicial elements. In this regard, health data breaches and AI decision making based on sensitive data, such as health data, may lead to discrimination and are considered a major risk.

In addition, the Medical Devices Regulation sets out requirements such as informed consent, transparency, access to information, and the provision of accessible and essential information about the device to the patient. Therefore, it is advisable, at a minimum, to demonstrate in submissions for the approval of medical devices to EMA the efforts made to overcome the AI-related challenges mentioned above. This can be tackled by presenting predictable and verifiable algorithms, a clear understanding of the categories of data used in the project, and the implementation of regular audits and procedures to avoid discrimination, errors and inaccuracies.

In view of the above, EMA still seems to be searching for an adequate approach to ensure that innovative AI-based technologies are effective and appropriate to support medical decisions, as well as to fit AI into the existing regulatory framework in a manner that allows these technologies to be accepted by society. Nevertheless, EMA has been supporting initiatives to explore AI and has already approved research based on artificial intelligence, such as the paediatric investigation plan for PXT3003 by Pharnext[3], which demonstrates that it is open to discussing AI-based projects.

As noted in a recent article by Daniel Walch, director of the Groupement Hospitalier de l'Ouest Lémanique (GHOL), and Xavier Comtesse, head of the first think tank in Switzerland and a PhD in Computer Science: “Artificial intelligence will not replace doctors. But the doctors who use AI will replace those who do not.” Therefore, a more practical approach to the approval of AI medical devices will be key to promoting innovation and trust in the European health sector, especially from May 2020, when the Medical Devices Regulation becomes fully applicable.

[1] HMA-EMA Joint Big Data Taskforce

[2] EMA Regulatory Science to 2025

[3] European Medicines Agency Agrees with Pharnext’s Pediatric Investigation Plan for PXT3003