AI in Pharma

Published 08 Aug 2022
Deadly diseases still exist to this day, and they need to be addressed: we have to predict, prevent and respond to outbreaks of deadly viruses such as SARS-CoV-2, the virus that causes COVID-19. Artificial Intelligence ('AI') technology may be of help here, as it combines the computational capacity of a computer with the reasoning ability of humans. AI technologies (e.g. Machine Learning, Natural Language Processing, Deep Learning) are applied in all stages of drug discovery, for example: target validation, identification of biomarkers, lead optimization, annotation and analysis of clinical data in trials, pharmacovigilance and clinical use optimization. Nonetheless, these AI applications create both regulatory and legal challenges.
The AI Act, an initiative of the European Commission, will also affect the application of AI in the pharmaceutical industry. This legal framework addresses, inter alia, the risks specifically created by AI applications. In particular, when AI systems are applied in a pharmaceutical setting (e.g. health apps), or when software is classified as a medical device under the Medical Device Regulation, the integrated AI system will fall within the high-risk category. Consequently, AI systems in the field of pharmaceutics will have to comply with the strict requirements imposed on high-risk AI systems.
Additionally, the AI Act introduces the AI regulatory sandbox. This instrument connects innovators and regulators and provides a controlled environment for them to cooperate. It should facilitate the development, testing and validation of innovative AI systems while ensuring compliance with the AI Act. It therefore offers interesting possibilities for AI systems used in pharma. See this blog for more information on the AI Act.
     
Regulatory challenges
- Regulators need to take into account the context of use of the AI system and its software and algorithm(s);
- Validation may require regulators to have access to the underlying algorithm and datasets;
- Updates to the AI software or hardware may require re-testing or bridging studies to ensure reproducibility/validation;
- The challenge lies in striking the right balance between AI and human oversight of signal detection and management;
- AI use may uncover safety signals that are more difficult to detect with current methods;
- Software updates that affect the data will need to be communicated to regulators.

Recommendations:
- Validation will require access to the algorithms employed and the underlying datasets; legal and regulatory frameworks may need to be adapted to ensure such access.
- Consider introducing the concept of a Qualified Person responsible for AI/algorithm oversight and compliance.
 
 
Legal challenges
- Regulatory status of AI/ML software;
- The 'black box' problem: the need for explainable AI;
- Securing data against cyberattacks.
 
 
1. Algorithm
 
Legal issues:
- IP related to the software/datasets used to develop and train the algorithm;
- IP related to AI and the outcome of the AI system (who has rights to the newly structured data, the correlations derived from the data, and the compounds discovered?);
- Exit rights: can the pharmaceutical company transfer the algorithm/data to another data science company?
- Whether the data science company may use the know-how for other customers;
- Liability for the outcome of the AI system;
- Non-performance of the AI system;
- Explainable AI ('XAI'): the black-box nature of the prediction model (a brief illustration follows this list).
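To make the 'black box' concern concrete, below is a minimal sketch of post-hoc explainability using permutation importance. It assumes a Python environment with scikit-learn installed; the synthetic data and the "biomarker" feature names are hypothetical, not taken from any real pharmaceutical system.

# A minimal, illustrative sketch of post-hoc explainability.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a (confidential) clinical dataset.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"biomarker_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble model whose individual predictions are hard to trace.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how strongly each feature drives the
# model's predictions, offering a model-agnostic (if coarse) look inside.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")

Techniques of this kind do not open the black box fully, which is precisely why explainability remains a legal as well as a technical question.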
 
 
2. Dataset 
 
Legal issues:
- Privacy: does the dataset contain personal (health) data?
- Quality of the dataset: the risk of bias in the data used (a small check is sketched after this list);
- Data sharing: the conditions under which data may be shared;
- IP related to the dataset;
- IP related to the AI system;
- Liability for the outcome of the AI system.
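As a minimal illustration of how dataset bias can be surfaced, the sketch below compares subgroup sizes and outcome rates. It assumes pandas is installed; the column names ("sex", "outcome") and the toy values are hypothetical.

# A minimal, illustrative check for dataset imbalance.
import pandas as pd

# Synthetic stand-in for a clinical dataset.
df = pd.DataFrame({
    "sex":     ["F", "F", "M", "M", "M", "M", "M", "M"],
    "outcome": [1, 0, 1, 1, 0, 1, 0, 1],
})

# A model trained on data where one group is under-represented
# may perform unevenly across groups.
print(df.groupby("sex")["outcome"].agg(["count", "mean"]))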
 
 