The European strategy for AI was presented in 2018, and the development of artificial intelligence has since been a top priority of the European Union (EU) and its member states. However, as set out by the European Commission, progress in the area faces ethical and legal challenges. Those emphasized in the strategy are the safety and liability of AI algorithms.

This article briefly explains the current state of the relevant European regulations and offers a look into the potential future of statutory compliance for these technologies.

 

Safety of AI algorithms

Before examining the problem, we should mention that EU lawmakers are still working on the very definition of AI. Perhaps the best reference we can quote is the definition provided by the High-Level Expert Group on Artificial Intelligence:

“Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.”

This technical definition is something to keep in mind when considering the legal issues with AI algorithms in the EU.

 

The current regulation on safety

The European Union has set up a safety and product liability regulatory framework along with a robust body of safety standards, complemented by national, non-harmonised liability legislation [1]. The most important acts that govern the statutory compliance of AI are the Machinery Directive [2], the General Product Safety Directive [3] and the Product Liability Directive [4].

The safety-related regulatory framework aims to ensure that products placed on the Union market meet high health, safety and environmental requirements. Put shortly, all consumer products offered on the EU market must be safe, and this of course applies to AI algorithms as well. Product safety in Europe is most often ensured through a process of standardisation. Since these regulations were written before the emergence of AI technologies, the standards and general rules rarely address issues specific to the technology, focusing instead on general safety challenges.

In terms of product safety, the EU regulations provide two phases of assessment:

  1. Ex ante – safety assessment during design and manufacturing;
  2. Ex post – market surveillance after a product has been launched.

The first phase is the more important one. Developers of AI algorithms have to abide by the technical requirements embedded in relevant EU and national laws, as well as the voluntary standards published in the Official Journal of the EU [5]. This phase involves self-assessment by the manufacturer and may also involve third-party certification or some other form of third-party involvement in the manufacturing process.

 

Practical examples of safety issues with AI algorithms

So far we have only been able to identify two cases of safety issues with products that involve, at least in part, contemporary technologies (Internet of Things). Both were reported through the European Rapid Alert System (RAPEX):

  1. A smartwatch for children introduced to the market in Iceland [6];
  2. A case in Germany regarding passenger cars with security gaps in the radio control software [7].

No issue deriving from the nature and functions of the algorithms themselves has been reported so far, but some hypothetical cases have been discussed. Perhaps the most popular one is potential discrimination by machine learning (ML) algorithms: an algorithm trained on certain data may show bias, for example against a race, gender or ethnic group.

A report published in Germany in August 2020 explores in detail the risk of discrimination through the use of algorithms [8]. One example of potential discrimination through the use of AI was found in an algorithm developed by Amazon for searching and evaluating the CVs of potential employees. Upon assessment of the system, it was established that it might not be gender neutral, since the data provided to the algorithm consisted predominantly of CVs of male employees. Amazon later suspended development of the system, and it currently remains only a case study.
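To make the mechanism concrete, below is a minimal sketch, in the spirit of the Amazon case, of how a classifier trained on historically skewed hiring data reproduces that bias. All data, feature names and coefficients here are hypothetical, invented purely for illustration:

```python
# Minimal sketch: a model trained on biased historical hiring data
# learns the bias. All data and feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: a "skill" score and gender (1 = male, 0 = female).
skill = rng.normal(0.0, 1.0, n)
gender = rng.integers(0, 2, n)

# Historical hiring labels that correlate with gender, not only skill --
# the bias is baked into the training data itself.
hired = (skill + 1.5 * gender + rng.normal(0.0, 0.5, n) > 1.0).astype(int)

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates identical in every respect except gender:
candidates = np.array([[0.5, 1],   # male
                       [0.5, 0]])  # female
print(model.predict_proba(candidates)[:, 1])
# The male candidate receives a markedly higher "hire" probability:
# the model has faithfully learned the discrimination in its data.
```

Note that simply removing the gender column does not necessarily fix the problem, since other features can act as proxies for gender, which is one reason algorithmic discrimination is discussed as a regulatory question rather than a purely technical one.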

 

Liability of manufacturers of AI algorithms

The product safety and liability regulations in the EU complement each other. The civil liability of manufacturers and distributors of products is regulated by the Product Liability Directive. The two main goals of these regulations are:

  1. Ensuring compensation for victims;
  2. Providing economic incentives for manufacturers to avoid causing damage in the first place.

Under European law, manufacturers of products are liable for damages caused by defects in their products. This is called civil liability and is considered a cornerstone of the consumer protection regulations of the European Union.

 

Liability in SaaP and SaaS

Something to consider in advance regarding the liability of software development companies in general is that the Product Liability Directive applies to software as a product (SaaP) and not to software as a service (SaaS). In the latter case the liability is contractual, since the terms of use of the software are agreed upon in a contractual relationship (most often a license agreement). This is where liability disclaimers come into play.

Under the Product Liability Directive, producers are liable for damages caused by a defect in their product regardless of whether the defect is due to negligence or ill intent: producers are liable without fault on their part. A very important exemption is that a software producer is not liable for damages if the state of scientific and technical knowledge at the time the product was put into circulation was not such as to enable the existence of the defect to be discovered (the so-called Development Risk Clause). This matters for AI algorithms, since it is often practically impossible to test an algorithm with all possible data that can be provided to it (see the sketch below). Member states of the EU, however, have the right to derogate from the Development Risk Clause, and some (Finland, France, Hungary, Luxembourg and Spain) have decided to do so.
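As a rough illustration of why exhaustive testing is impractical, consider a back-of-the-envelope calculation; the numbers are arbitrary and chosen only to show the scale:

```python
# Back-of-the-envelope sketch: even a modest, coarsely discretised
# input space is far too large to enumerate. Numbers are arbitrary.
features = 30            # a modest feature vector
values_per_feature = 10  # coarse discretisation of each feature

total_inputs = values_per_feature ** features
print(f"{total_inputs:.1e} distinct inputs")   # 1.0e+30

rate = 1e9               # one billion test cases per second
years = total_inputs / rate / (60 * 60 * 24 * 365)
print(f"~{years:.1e} years to test them all")  # ~3.2e+13 years
```

Even with aggressive discretisation, the input space grows exponentially with the number of features, which is the intuition behind the Development Risk Clause's relevance to AI.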

 

Practical examples of liability of producers of AI algorithms

Under the European regulations, software development companies are liable for damages that their products cause to their clients. A good example to focus on is the car industry, specifically self-driving cars. Civil liability here is quite complex due to the involvement of insurers, but let's take a detailed look at a hypothetical car accident caused by the malfunctioning of the AI algorithm that decides whether the car should stop to prevent a collision.

The victim of the car accident is entitled to claim damages from the insurer of the self-driving car. The insurer now suffers damages – the insurance compensation paid to the victim. Since the accident was caused by a defect in the car, the insurer acquires a claim against the car producer. Usually the car producer would be the last link in the liability chain, but let's imagine that this particular producer did not develop its own AI for self-driving cars but bought one from a software development company. Now contractual liability comes into play, and what matters is what warranties were provided under the agreement and whether the software company can be held liable for the malfunctioning of the car in the first place. Car producers are not consumers, and the Product Liability Directive does not apply, so what is written in the license agreement is decisive.

These cases are rarely straightforward, and it would be both impossible and wrong to draw a general conclusion on the limits of the contractual liability of software development companies. The best answer to such a question will always be: it depends.

 

A look into the future

The rapid development of artificial intelligence technologies presents not only safety challenges for manufacturers but also regulatory challenges for legislators. As mentioned above, the current regulatory framework of the EU predates the emergence of AI algorithms, and thus a lot of ambiguity remains in the area.

Since the development of AI has been set as a focus area for the EU, we may expect the regulations to develop with time as well. However, this is a slow process, and in terms of technological progress laws are always at least one step behind. In the near future it is likely that major regulations will be preceded by industry standards.

As has been the case for many emerging technologies, future regulations will almost certainly focus on privacy protection, cybersecurity (prevention of malicious use of AI) and consumer protection.

 

References:

  1. Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics – COM(2020) 64 final
  2. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32006L0042
  3. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32001L0095
  4. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A31985L0374
  5. https://ec.europa.eu/growth/single-market/european-standards/harmonised-standards_en
  6. RAPEX notification from Iceland, published on the EU Safety Gate website (A12/0157/19)
  7. RAPEX notification from Germany, published on the EU Safety Gate website (A12/1671/15)
  8. https://www.antidiskriminierungsstelle.de/SharedDocs/Downloads/EN/publikationen/Studie_en_Diskriminierungsrisiken_durch_Verwendung_von_Algorithmen.html

 

— — —

We put a lot of effort into the content on our blog. We use multiple information sources, do our own analysis and always double-check what we have written. However, it is still possible that factual or other mistakes occur. If you choose to use what is written on our blog in your own business or personal activities, you do so at your own risk. Be aware that Perelik Soft Ltd. is not liable for any direct or indirect damages you may suffer regarding the use of the content of our blog.