New European rules for Artificial Intelligence

20 May 2021

On 21 April 2021, the European Commission presented a proposal for new rules on Artificial Intelligence (AI).

This first article explains the main points of the proposal.

Why new rules?

AI is already widely used, often without us realizing it. Most AI systems do not pose a risk to users, but that does not apply to all of them. Existing regulations are insufficient to guarantee the safety of users and the protection of their fundamental rights, which can jeopardize confidence in AI.

Which risk categories?

The European Commission proposes a risk-based approach with four risk levels:

Unacceptable risk

A very limited number of AI applications are classified as an unacceptable risk. These violate fundamental rights and are therefore prohibited. As examples, the Commission mentions the social scoring of citizens by governments and remote biometric identification in public spaces. A few exceptions apply to the latter.

High risk

A slightly larger number of AI applications pose a high risk. These are listed in the proposal. They are considered high risk because they have an impact on fundamental rights. The list of these AI applications can be amended over time.

These high-risk AI applications must meet several mandatory requirements, including quality requirements for the datasets used, technical documentation, transparency and the provision of information to users, human oversight, and robustness, accuracy, and cybersecurity. National supervisory authorities will be given investigative powers with regard to these requirements.

Limited risk

A larger group of AI applications poses a limited risk. Transparency obligations will suffice here. The Commission cites chatbots as an example: users need to know that they are communicating with a chatbot.

Minimal risk

For all other AI applications, the existing laws and regulations are sufficient. Most current AI applications fall into this category.

How do you categorize AI products?

The Commission has devised a method for categorizing AI applications into one of the four risk levels. Its purpose is to provide legal certainty for companies and other stakeholders. The risk is assessed on the basis of the intended use. This means that the following factors are considered:

- the intended purpose
- the number of people potentially affected
- the extent to which those affected depend on the outcome
- the irreversibility of the damage

What are the consequences for high-risk AI systems?

Before these high-risk AI systems can be used, their compliance with the regulations must be assessed. This assessment must show that the AI system complies with the requirements regarding data quality, documentation and traceability, transparency, human oversight, accuracy, and robustness. For some AI systems, a "notified body" will have to be involved. The supplier must also set up a risk management system for these AI systems.

Who will enforce these rules?

Member States will have to designate an authority to monitor compliance.

Codes of Conduct

Suppliers of AI systems that are not high-risk can draw up a voluntary code of conduct for the safe application of AI systems. The Commission encourages the industry to come up with such codes.

Who is liable when importing AI systems?

The importer of AI systems into the EU is responsible for the imported AI system. The importer must ensure that the producer complies with EU regulations and that the AI system bears a CE mark.

What is the sanction?

Violation of these regulations can be sanctioned with a fine of up to 6% of annual turnover in the preceding financial year.

This was a first analysis of the Commission's proposal. A more detailed analysis will follow, as will an analysis of the proposed new Machinery Regulation.

To provide more information about the legal and ethical aspects of AI, we are developing LegalAIR. This platform will provide practical information and tools on how to deal with AI and AI systems.

If you have any questions, please contact Jos van der Wijst (wijst@bg.legal).

Jos van der Wijst
