AI Act and Pharma / Health

08 Aug 2022

The European Commission has proposed a legal framework on Artificial Intelligence (‘AI’). This AI Act takes a risk-based approach, with clear requirements and obligations for specific uses of AI. More specifically, the AI Act defines four levels of risk for AI systems: (i) unacceptable risk, (ii) high risk, (iii) limited risk and (iv) minimal/no risk. Every AI system must be categorised, on the basis of a self-assessment, into one of these levels.

Most AI systems used in pharma

Most AI systems used in pharma will probably fall within level (iii) or (iv). However, if an AI system forms a component of a product (e.g. the Central Nervous System App in the ICMRA report[1]), it will probably be considered a high-risk AI system. As a result, these AI systems will be subject to strict obligations. For example:

  • Adequate risk assessment and mitigation systems;
  • High quality of the datasets feeding the system to minimize risks and discriminatory outcomes;
  • Logging of activity to ensure traceability of results;
  • Detailed documentation providing all information on the system and its purpose necessary for authorities to assess its compliance;
  • Clear and adequate information to the user;
  • Appropriate human oversight measures to minimize risk;
  • High level of robustness, security and accuracy.

Moreover, the following are mentioned as key risks of AI in relation to health[2]:

    • Bias in training data may lead to discrimination and individual injury or death (e.g. racial bias may lead to incorrect diagnoses) and deepen existing socio-economic inequalities;
    • Technical system errors in AI could lead to mass patient injuries because of widespread use;
    • Increased use and sharing of health data threatens privacy and data protection rights of patients;
    • Lack of transparency and explainability threatens patients’ rights to information and to informed consent to medical treatment;
    • Issues with cybersecurity threaten patients’ health in the case of cyberattacks on for example insulin pumps and pacemakers.

It has therefore been suggested to classify all health-related AI systems (i.e. public health, pharmaceuticals and wellbeing) as ‘high risk’ as referred to in Annex III of the AI Act.

AI regulatory sandbox

The proposal for the AI Act also introduces the instrument of the AI Regulatory Sandbox. This is “a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service”.

The benefits of a regulatory sandbox in pharma include:

  • Early involvement of the regulator (with the aim of mutual learning and adaptation);
  • Faster adaptation of current regulations to suit the new product or service.

Contact

For more information in relation to (legal aspects of) AI, please contact Jos van der Wijst.

[1] ICMRA, Informal Innovation Network, Horizon Scanning Assessment Report – Artificial Intelligence, 6 August 2021

[2] https://haiweb.org/prioritise-health-in-the-artificial-intelligence-act
