AI in Pharma

08 Aug 2022

Deadly diseases still exist today and must be addressed: we have to predict, prevent and respond to outbreaks of deadly viruses such as COVID-19. Artificial Intelligence (‘AI’) technology may help here, as it combines the computational capacity of a computer with reasoning abilities modelled on those of humans. AI technologies (e.g. machine learning, natural language processing, deep learning) are applied in all stages of drug discovery, for example: target validation, identification of biomarkers, lead optimization, annotation and analysis of clinical data in trials, pharmacovigilance and clinical use optimization. Nonetheless, these AI applications create both regulatory and legal challenges.

The AI Act, an initiative of the European Commission, will also affect the application of AI in the pharmaceutical industry. This legal framework addresses, inter alia, the risks specifically created by AI applications. In particular, when AI systems are applied in a pharmaceutical setting (e.g. health apps), or when software is classified as a medical device under the Medical Device Regulation, the integrated AI system will fall within the high-risk category. Consequently, AI systems in the field of pharmaceutics have to comply with the strict requirements imposed on high-risk AI systems.

Additionally, the AI Act introduces the AI regulatory sandbox: a controlled environment in which innovators and regulators can connect and cooperate. It should facilitate the development, testing and validation of innovative AI systems while ensuring compliance with the AI Act. This instrument therefore offers interesting possibilities for AI systems used in pharma. See this blog for more information on the AI Act.

Regulatory challenges

The International Coalition of Medicines Regulatory Authorities (ICMRA) has published a report on a horizon scanning exercise in AI.[1] To examine the challenges the use of AI poses to global medicine regulation, two hypothetical case studies were developed.

a. AI in clinical medicine development and use: A Central Nervous System App

  • Regulators need to take into consideration the context of use of the AI system and its software and algorithm(s);
  • Validation may require access to the underlying algorithm and datasets by regulators;
  • Updates to the AI software or hardware would need re-testing or bridging studies to ensure reproducibility/validation.

b. Hypothetical case study: AI in pharmacovigilance

  • The challenge lies in creating the right balance between AI and human oversight of signal detection and management;
  • AI use may discover safety signals that are more difficult to detect with current methods;
  • Software updates that affect the data will need to be communicated to regulators.

Recommendations:

  • Validation will require access to the employed algorithms and underlying datasets; legal and regulatory frameworks may need to be adapted to ensure such access.
  • Consider introducing the concept of a Qualified Person responsible for AI/algorithm oversight and compliance.

Legal challenges

  • Regulatory status of AI/ML software
  • Black box: explainable AI
  • Securing data against cyber attacks

1. Algorithm

In most cases the algorithm (software) will be developed by a data science company rather than by the pharmaceutical company. The parties therefore need to address issues such as liability, and a license for the use of the software service and its updates, by way of an agreement. Furthermore, the data science company needs to develop the software in such a way that the pharmaceutical company can meet the regulatory requirements (transparency, explainability and possible regulatory access to the employed algorithms and underlying datasets). The license agreement also needs to address ownership and usage rights, not least because, from a legal perspective, ‘ownership’ (in Dutch: ‘eigendom’) of data is not possible. On the other hand, the data science company may want to use the algorithm, or the know-how gained in developing it, for other customers. This too needs to be addressed in the agreement.

Legal issues:

  • IP related to the software/data sets used to develop and train the algorithm.
  • IP related to AI and the outcome of the AI system (who has rights to the newly structured data, the correlations derived from the data, the compounds discovered).
  • Exit rights: can the pharmaceutical company transfer the algorithm/data to another data science company?
  • Whether the data science company may use the know-how for other customers.
  • Liability for the outcome of the AI system.
  • Non-performance of the AI system.
  • Explainable AI (XAI): the black-box nature of the prediction model.

2. Dataset

There is rapid growth in all kinds of datasets that are useful in the drug discovery process, and each phase of drug discovery draws on different sources. The legal challenges that arise in a given case will depend on the nature of the dataset. In most cases the dataset will contain non-personal (health) data: when AI is used for target identification or lead optimization, the dataset will not contain personal data. However, if AI is used for disease identification, in a diagnostic context or in clinical trials, the data will contain personal (health) data. This will also be the case in the phase of monitoring the medicines or the medical treatment. This is especially true for rare diseases during this phase, as the number of patients will be small, making it increasingly difficult to anonymize the data in such a way that it cannot be linked to a specific person. When the dataset does contain personal (health) data, ethical aspects need to be taken into consideration: transparency and explainability (the problem of opacity); equality, fairness and non-discrimination (the problem of bias); and data quality and data sharing (the problem of privacy).
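Why rare-disease data is hard to anonymize can be illustrated with the notion of k-anonymity: a dataset is k-anonymous if every combination of quasi-identifiers (age bracket, region, diagnosis, etc.) occurs at least k times. The sketch below is purely illustrative in Python; the field names and patient records are invented for the example and do not come from any real dataset.

```python
from collections import Counter

# Hypothetical records containing only quasi-identifiers (no direct identifiers):
# (birth decade, region, diagnosis)
records = [
    ("1950-1959", "NL-Noord", "rare_disease_X"),
    ("1950-1959", "NL-Zuid",  "rare_disease_X"),
    ("1980-1989", "NL-Noord", "rare_disease_X"),
    ("1950-1959", "NL-Noord", "common_disease_Y"),
    ("1950-1959", "NL-Noord", "common_disease_Y"),
    ("1950-1959", "NL-Noord", "common_disease_Y"),
]

def k_anonymity(rows):
    """Smallest group size over identical quasi-identifier combinations."""
    return min(Counter(rows).values())

# Each of the three rare-disease patients has a unique combination, so k = 1:
print(k_anonymity(records))  # prints 1
```

A k of 1 means at least one patient is uniquely identifiable from the quasi-identifiers alone. With a small rare-disease population this is hard to avoid without heavy generalization of the data, which is why such datasets often remain personal (health) data in the legal sense.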

Legal issues:

  • Privacy: does the dataset contain personal (health) data?
  • Quality of the dataset: risk of bias
  • Data sharing: conditions for data sharing
  • IP related to the data set
  • IP related to the AI system
  • Liability for the outcome of the AI system

Contact

For more information on AI in pharma, contact Jos van der Wijst.

[1] ICMRA, Informal Innovation Network, Horizon Scanning Assessment Report – Artificial Intelligence, 6 August 2021

 
