The European strategy of regulation on artificial intelligence

White & Case Tech Newsflash

On 12 February 2019, the European Parliament adopted a Resolution on a comprehensive European industrial policy on artificial intelligence (AI) and robotics1. After describing AI as "one of the strategic technologies of the 21st century"2, the European Parliament presented several recommendations to the Member States. This Resolution underlines the need to close the European gap with North America and Asia-Pacific, and promotes a coordinated approach at the European level "to be able to compete with the massive investments made by third countries, especially the US and China"3. Europe is well behind in private investments in AI, with €2.4 to €3.2 billion in 2016, as opposed to €6.5 to €9.7 billion in Asia-Pacific and €12.1 to €18.6 billion in North America. To address this challenge, the European Parliament sets out a general approach based on a strategic regulatory environment for AI and encourages strong user protections.

An existing legislative framework

Throughout the Resolution, the European Parliament repeatedly notes that "existing legal schemes and doctrines can be readily applied"4. At this stage, various European provisions already govern elements essential to the functioning of AI technologies: databases, which allow AI to operate5; software, which is the essence of AI6; and personal data7.

However, the European Parliament notes the absence of any specific provision on liability, which compromises legal certainty. Although civil rules currently do apply, there is a risk that such rules will become inadequate and insufficient, given the specific nature of AI.

With regard to connected and automated vehicles, the European Parliament points out that some Member States have already adopted national legislation in this area, which could result in a "patchwork of national legislation hampering the development of autonomous vehicles". It therefore calls for a single set of European rules in order to avoid "over-regulation in robotics and AI systems"8.

The European Parliament also devotes an entire section of the Resolution to cybersecurity, which is an important aspect of AI since "AI can simultaneously be a cybersecurity threat and the tool for fighting cyberattacks"9. Today, it seems essential to prevent loopholes, cyberattacks and misuse of AI, by implementing "product safety controls by market surveillance authorities and consumer protection rules." Specifically, the Parliament recommends that the EU invest in its technological independence and develop its own infrastructure, data centers, cloud systems and components, such as graphics processors and chips10.

Without contemplating any new specific provision for AI in the near future, the European Parliament recommends that the European Commission "regularly re-evaluate current legislation to ensure that it is fit for purpose with respect to AI while also respecting EU fundamental values, and to seek to amend or substitute new proposals where this is shown not to be the case"11, and to monitor the relevance and effectiveness of intellectual property rules12.

The European Parliament notably welcomes the use of "regulatory sandboxes," which offer AI operators the opportunity to test, under real conditions, the safety and efficacy of their technologies by temporarily releasing them from regulatory constraints13.

The importance of humanity and ethics

Humanity and ethics are two points of particular concern to the Parliament, and the Resolution devotes an entire chapter to them. It strongly advocates "a human-centric technology,"14 which would avoid possible misuses of AI technologies to the detriment of fundamental rights. Throughout the Resolution, the European Parliament insists on the primacy of humans over computer systems, based on "the 'man operates machine' principle of responsibility,"15 and recommends that "humans must always be ultimately responsible for decision-making"16.

In this respect, the Resolution promotes a principle of transparency and denounces the risks associated with "static and opaque"17 algorithms. According to the European Parliament, "any AI system must be developed with respect for the principles of transparency and algorithmic accountability allowing for human understanding of its actions"18. The European Parliament considers, however, that disclosure of the algorithm itself would discourage companies from investing in research and development19.

The Resolution therefore proposes to establish a legal framework based on the notion of ethics, promoting "ethical by design"20 technologies, akin to the privacy-by-design concept introduced by the General Data Protection Regulation.

Next steps

The European Parliament calls on the European Commission to work on the development of a strong EU leadership and to ensure coherent national-level policies.

Following this Resolution, on 8 April 2019, the European Commission launched a pilot phase with a view to implementing ethical guidelines. This phase is part of the AI strategy presented by the European Commission in April 2018, which aims to increase public and private investments in the AI sector, with the objective of reaching €20 billion per year over the next ten years21. Since then, the European Commission has continued promoting the deployment of a European strategy on AI: "A European approach on AI will boost the European Union's competitiveness and ensure trust based on European values"22.

More specifically, the pilot phase is based on the work of the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent expert group set up by the European Commission in June 2018, which published the Ethics Guidelines for Trustworthy AI. The expert group sets out a framework for achieving "trustworthy" AI based on three components: (i) lawfulness (it should comply with applicable laws and regulations); (ii) ethics (it should ensure that ethical values are respected); and (iii) robustness (from both a technical and a social perspective).

It is precisely on the aspects related to the ethics and robustness of AI that the guidelines specify their conditions of implementation. From a practical point of view, the guidelines state that "ideally, all three components work in harmony and overlap their operations. If, in practice, tensions arise between these components, society should endeavor to align them."

The guidelines also identify seven key requirements necessary to ensure "trustworthy" AI: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; (6) societal and environmental well-being; and (7) accountability.

In addition, the expert group sets out several principles considered essential for the development and use of a "trustworthy" AI, in particular: (i) respect for human autonomy; (ii) prevention of harm; (iii) fairness; and (iv) "explainability"23 of AI technologies.

Although these guidelines may seem rather theoretical, the European Commission has made a practical effort to translate these principles into operational guidance, including an assessment list that AI actors can use to check whether they comply with these requirements.

AI actors are therefore strongly encouraged to implement these requirements, both technically and non-technically, and to report their results within the framework of the European Alliance for AI24. Based on their feedback and contributions, a revised version of the guidelines will be presented to the European Commission in early 2020.


European institutions aim to encourage research and discussion on an ethical framework for AI technologies, at both the European and international levels. They actively promote global work, especially within the OECD and G20, to ensure that the EU remains competitive while sharing the benefits of AI development as widely as possible. These initiatives show that European institutions are striving more than ever to position themselves in the AI sector through an approach based on stakeholder involvement, so that they can contribute effectively to the evolution of AI regulation going forward.


1 European Parliament Resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics 2018/2088 (INI).
2 Pt D of the Resolution.
3 Pts AF and I of the Resolution.
4 Pt. 136 of the Resolution.
5 Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 transposed notably into the French Code of Intellectual Property, Articles L. 341-1 et seq.
6 Directive 91/250 of the Council of the European Communities of 14 May 1991 on the legal protection of computer programs. This Directive was transposed into the French Code of Intellectual Property, Articles L. 122-6 et seq.
7 When personal data are collected, full compliance with the EU legal framework on the protection of personal data is required.
8 Pt 120 of the Resolution.
9 "The building of a trusted ecosystem for the development of AI technology should be based on data policy architecture", Pt 102 of the Resolution.
10 Pt 103 of the Resolution.
11 Pt. 114 of the Resolution.
12 Pt. 136 of the Resolution.
13 Report "Giving meaning to artificial intelligence", by the French deputy Cédric Villani. In France, Law No. 2016-1321 of 7 October 2016 for a Digital Republic took a first step forward by setting forth possibilities of experimentation in the telecoms sector.
14 Pt 5.1 of the Resolution.
15 Pt. AK of the Resolution.
16 Pt 123 of the Resolution.
17 Pt 156 of the Resolution.
18 Pt 158 of the Resolution.
19 Pt 166 and 169 of the Resolution.
20 Section 5.2 of the Resolution.
21 Press release of the European Commission, 8 April 2019.
22 Factsheet: Artificial Intelligence for Europe, 5 April 2019.
23 Page 18 of the Ethics Guidelines for Trustworthy AI: "Explainability concerns the ability to explain both the technical processes of an AI system and the related human decisions (e.g., application areas of a system)".
24 A discussion forum on artificial intelligence development and its impacts, created in 2018 and steered by the AI HLEG.


This publication is provided for your convenience and does not constitute legal advice. This publication is protected by copyright.
© 2019 White & Case LLP