Risk check for AI applications

Published 17 Aug 2021
Often without knowing it, we use products and services every day in which artificial intelligence ("AI") has been applied, such as speech recognition in the car, chatbots on websites, diagnosis of cancer cells and automated decision-making. Because more and more parties, both commercial and governmental, have access to ever larger amounts of data, that data can be used to build models that make predictions. AI is used to create those models.
For developers of AI applications, their clients and the users of AI applications, the question is which laws and regulations an AI application must comply with. Where does an AI application pose a risk, how big or small is that risk, and how can that risk be mitigated or eliminated? There are also questions about:
- intellectual property: is an AI application, or the results of an AI application, protected by an intellectual property right or as a trade secret?
- competition: can you refuse to share an AI application with competitors?
- liability: who is liable for damage caused by or with an AI application?
- civil law: who 'owns' the (existing/new) data, who is allowed to do what with the data, what happens to the data and the algorithm after the use of an AI application ends, and can a lien be established on an algorithm, AI application or data set?
BG.legal can carry out an AI risk check ("AI Risk Assessment") and advise on how to mitigate or eliminate any risks.
     
What does it mean exactly?
In the AI Risk Assessment we assess whether an AI application is:
- legal – all applicable laws and regulations are complied with;
- ethical – ethical principles and values are respected;
- robust – the AI application is robust from both a technical (cyber security) and a social point of view.
 
 
How it works
- performing a pre-test: is it necessary to perform an AI Risk Assessment? If the risks are very limited, a full AI Risk Assessment may not be necessary.
- performing the Risk Assessment: together with the client, we determine in advance which of the client's team members we will carry out the assessment with, how we will carry it out, and whether external parties (ethicists, information security experts, etc.) will be part of the team.
- after the assessment, the client receives a report outlining the risks of the AI application in question, with recommendations on how those risks can be mitigated.
- after measures have been taken to mitigate the risks, we can carry out the AI Risk Assessment again and issue a new report.
 
 
Why have BG.legal perform the AI Risk Assessment?
 
What does it cost?
 
More information?