AI Act and Pharma / Health

Published 08 Aug 2022
The European Commission has proposed a legal framework on Artificial Intelligence (‘AI’). The AI Act takes a risk-based approach, with clear requirements and obligations for specific uses of AI. More specifically, the AI Act defines four levels of risk: (i) unacceptable risk, (ii) high risk, (iii) limited risk and (iv) minimal/no risk. Every AI system must be categorized, on the basis of a self-assessment, into one of these levels.
Most AI systems used in Pharma will probably fall within level (iii) or (iv). If an AI system forms a component of a product (e.g. the Central Nervous System App in the ICMRA report[1]), it will probably be classified as a high-risk AI system. As a result, such AI systems will be subject to strict obligations, for example:
- Adequate risk assessment and mitigation systems;
- High-quality datasets feeding the system, to minimize risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing all information on the system and its purpose that authorities need to assess its compliance;
- Clear and adequate information to the user;
- Appropriate human oversight measures to minimize risk;
- A high level of robustness, security and accuracy.
At the same time, the use of AI in pharma and health raises specific risks:
- Bias in training data may lead to discrimination and individual injury or death (e.g., racial bias may lead to incorrect diagnoses) and may deepen existing socio-economic inequalities;
- Technical system errors in AI could lead to mass patient injuries because of widespread use;
- Increased use and sharing of health data threatens patients’ privacy and data protection rights;
- Lack of transparency and explainability threatens patients’ rights to information and to informed consent to medical treatment;
- Cybersecurity issues threaten patients’ health in the event of cyberattacks on, for example, insulin pumps and pacemakers.
AI regulatory sandbox

The AI Act also provides for AI regulatory sandboxes, which offer:
- Early involvement of the regulator, with the aim of mutual learning and adaptation;
- Faster adaptation of current regulations to suit the new product or service.
 