AI Act and Pharma / Health

08 Aug 2022

The European Commission has proposed a legal framework on Artificial Intelligence (‘AI’). This AI Act takes a risk-based approach, with clear requirements and obligations for specific uses of AI. More specifically, the AI Act defines four levels of risk in AI: (i) Unacceptable risk, (ii) High risk, (iii) Limited risk and (iv) Minimal / No risk. Every AI system needs to be categorized, based on a self-assessment, into one of these levels.
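As a rough sketch only, this tiered structure can be modelled in code. The Python illustration below uses simplified, hypothetical decision criteria (the boolean flags), not the Act's actual legal tests:

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk levels defined in the proposed AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed, subject to strict obligations
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no additional obligations

def self_assess(prohibited_practice: bool,
                safety_component_of_product: bool,
                interacts_with_natural_persons: bool) -> RiskLevel:
    """Illustrative self-assessment. The three flags are simplified
    placeholders for the Act's legal criteria, not the criteria themselves."""
    if prohibited_practice:
        return RiskLevel.UNACCEPTABLE
    if safety_component_of_product:   # e.g. a component of a medical product
        return RiskLevel.HIGH
    if interacts_with_natural_persons:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL
```

A real self-assessment would of course turn on the Act's definitions and annexes rather than on boolean flags.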

Most of the AI systems used in pharma will probably fall within level (iii) or (iv). However, if an AI system forms a component of a product (e.g. the Central Nervous System App in the ICMRA report[1]), it will probably be considered a high-risk AI system. As a result, such AI systems will be subject to strict obligations. For example:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimize risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results (see the sketch after this list);
  • Detailed documentation providing all information necessary on the system and its purpose, so that authorities can assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimize risk;
  • High level of robustness, security and accuracy.
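To make the logging obligation more concrete, here is a minimal, hypothetical sketch of an audit trail that records every model output. The field names and structure are illustrative assumptions, not requirements prescribed by the AI Act:

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: nothing here is prescribed by the AI Act.
logging.basicConfig(filename="audit_trail.log", level=logging.INFO)
logger = logging.getLogger("ai_audit_trail")

def log_prediction(model_version: str, input_ref: str,
                   output: str, operator_id: str) -> None:
    """Record one model output so the result stays traceable."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the result
        "input_ref": input_ref,          # reference (e.g. a hash) to the input data
        "output": output,                # the result that was produced
        "operator": operator_id,         # who ran the system (human oversight)
    }
    logger.info(json.dumps(record))
```

Structured records like these keep each result linkable to the exact model version and input that produced it, which is the essence of traceability.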

Moreover, the following key risks of AI in relation to health have been identified[2]:

    • Bias in training data may lead to discrimination and individual injury or death (e.g. racial bias may lead to incorrect diagnoses) and deepen existing socio-economic inequalities;
    • Technical system errors in AI could lead to mass patient injuries because of widespread use;
    • Increased use and sharing of health data threatens patients’ privacy and data protection rights;
    • Lack of transparency and explainability threatens patients’ rights to information and to informed consent to medical treatment;
    • Cybersecurity issues threaten patients’ health in the case of cyberattacks on, for example, insulin pumps and pacemakers.


It has therefore been suggested to classify all health-related AI systems (i.e. public health, pharmaceuticals and wellbeing) as ‘high risk’ within the meaning of Annex III of the AI Act.

AI regulatory sandbox

The proposal for the AI Act also introduces the instrument of the AI Regulatory Sandbox. This is “a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service”.

The benefits of a regulatory sandbox in pharma are:

  • Early involvement of the regulator (with the aim of mutual learning and adaptation);
  • Faster adaptation of current regulations to suit the new product or service.

Contact

For more information in relation to (legal aspects of) AI, please contact Jos van der Wijst.

[1] ICMRA, Informal Innovation Network, Horizon Scanning Assessment Report – Artificial Intelligence, 6 August 2021

[2] Health Action International, ‘Prioritise Health in the Artificial Intelligence Act’, https://haiweb.org/prioritise-health-in-the-artificial-intelligence-act