Risk check for AI applications

17 Aug 2021

Often without realising it, we use products and services every day in which artificial intelligence ("AI") has been applied: speech recognition in the car, chatbots on websites, diagnosis of cancer cells, and automated decision-making. Because more and more parties, both commercial parties and governments, have access to ever larger amounts of data, that data can be used to build models that make predictions. AI is used to create those models.

For developers of AI applications, for their clients and for those who use AI applications, the question is which laws and regulations an AI application must comply with. Where does an AI application form a risk, how big or small is that risk, and how can that risk be mitigated or eliminated? There are also questions about intellectual property (is an AI application, or the output of an AI application, protected by an intellectual property right or trade secret?), competition law (can you refuse to share an AI application with competitors?), liability (who is liable for damage caused by or with an AI application?) and civil law (who 'owns' the existing or new data, who may do what with the data, what happens to the data and the algorithm after the use of an AI application ends, and can a lien be established on an algorithm, AI application or data set?).

BG.legal can carry out an AI risk check ("AI Risk Assessment") and advise on how to mitigate or eliminate any risks.

What does it mean exactly?

Laws and regulations often already apply to AI applications: for example, the General Data Protection Regulation, the Medical Device Regulation, the Constitution and the Charter of Fundamental Rights of the European Union, and product liability rules. But many aspects are not yet regulated. New regulation is on its way, such as the proposal for a European AI Regulation. See our blog about this proposal.

In an AI Risk Assessment, we analyse whether a specific AI application complies with current laws and regulations and with the proposed EU AI Regulation. This means that we assess the application against three components:

  1. legal – all applicable laws and regulations are complied with;
  2. ethical – ethical principles and values are respected;
  3. robust – the AI application is robust from both a technical (cyber security) and a social point of view.

In our concrete advice, we indicate how risks can be eliminated or mitigated.

How it works

To carry out the AI Risk Assessment, we use a model in which we take the following steps:

  1. Pre-test: is an AI Risk Assessment necessary? If the risks are very limited, a full assessment may not be needed.
  2. Risk Assessment: together with the client, we determine in advance the client's team with whom we will carry out the assessment, how we will carry it out, and whether external parties (ethicists, information security experts, etc.) will be part of the team.
  3. Report: after the assessment, the client receives a report outlining the risks of the AI application in question, with recommendations on how those risks can be mitigated.
  4. Follow-up: after measures have been taken to mitigate the risks, we can carry out the AI Risk Assessment again and issue a new report.

The report can be shared with external parties such as (potential) clients.

Why have BG.legal perform the AI Risk Assessment?

BG.legal has a team of lawyers and a data scientist that focuses on the legal aspects of data and AI. We have advised clients on these topics for several years. Our clients are companies (startups, scale-ups and SMEs), governments and knowledge institutions. Some of them develop AI applications, some have AI applications developed for them, and some are customers or users of an AI application.

BG.legal has developed the knowledge platform legalAIR (www.legalair.nl). Jos van der Wijst, head of the BG.tech team, coordinates the activities in the field of legal aspects of AI for the Dutch AI coalition. He is part of the core team Human-oriented AI of the NL AIC.

BG.legal has the knowledge and experience to perform an AI Risk Assessment.

What does it cost?

The costs of performing an AI Risk Assessment depend on the nature and size of the AI application. After an initial meeting, we will make a quotation for the costs.

More information?

For more information, please contact Jos van der Wijst:

M: +31 (0)650695916
E: wijst@bg.legal

Jos van der Wijst