New European rules for Artificial Intelligence

20 May 2021

On 21 April 2021, the European Commission presented a proposal for new rules on Artificial Intelligence (AI).

This first article explains the main points of the proposal.

Why new rules?

AI is already widely used, often without us realizing it. Most AI systems pose no risk to users, but that does not apply to all of them. Existing regulations are insufficient to guarantee users' safety and fundamental rights, which can undermine confidence in AI.

Which risk categories?

The European Commission proposes a risk-based approach with four risk levels:

Unacceptable risk

A very limited number of AI applications are classified as posing an unacceptable risk. These violate fundamental rights and are therefore prohibited. As examples, the Commission mentions social scoring of citizens by governments and remote biometric identification in public spaces; a limited number of exceptions apply to the latter.

High risk

A slightly larger number of AI applications pose a high risk. These are described in the proposal. They are considered high risk because they have an impact on fundamental rights. The list of these AI applications can be amended over time.

These high-risk AI applications must meet several mandatory requirements, including quality requirements for the datasets used, technical documentation, transparency and the provision of information to users, human oversight, and robustness, accuracy, and cybersecurity. National supervisory authorities will be given powers to investigate compliance with these requirements.

Limited risk

A larger group of AI applications poses a limited risk. Here, transparency obligations will suffice. The Commission cites chatbots as an example: users need to know that they are communicating with a chatbot.

Minimal risk

For all other AI applications, the existing laws and regulations are sufficient. Most current AI applications fall into this category.

How do you categorize AI products?

The Commission has devised a method for categorizing AI applications into one of the four risk levels. Its purpose is to provide legal certainty for companies and other parties. The risk is assessed on the basis of the intended use, looking at the following factors:

- the intended purpose
- the number of people potentially affected
- the dependence of those affected on the outcome
- the irreversibility of the damage

What are the consequences for high-risk AI systems?

Before these high-risk AI systems can be used, their compliance with the regulations must be assessed. This assessment must show that the AI system meets the requirements regarding data quality, documentation and traceability, transparency, human oversight, accuracy, and robustness. For some AI systems, a "notified body" will have to be involved. The supplier must also set up a risk management system for these AI systems.

Who will enforce these rules?

Member States will have to designate an authority to monitor compliance.

Codes of Conduct

Suppliers of high-risk AI systems can draw up a voluntary code of conduct for the safe application of AI systems. The Commission encourages industry to come up with such codes.

Who is liable when importing AI systems?

The importer of an AI system into the EU is responsible for the imported AI system and must ensure that the producer complies with EU regulations and that the system bears a CE mark.

What is the sanction?

Violations of these regulations can be sanctioned with a fine of up to 6% of annual turnover in the preceding calendar year.

This was a first analysis of the Commission's proposal. A more detailed analysis will follow, as will an analysis of the proposed new Machinery Regulation.

To provide more information on the legal and ethical aspects of AI, we are developing LegalAIR, a platform that will offer practical information and tools on how to deal with AI and AI systems.

For questions, please contact Jos van der Wijst (wijst@bg.legal).

Jos van der Wijst