The EU’s First Mover Advantage as it Proposes a New Regulatory “System” for Artificial Intelligence
The European Union reached, on Wednesday, April 21, 2021, a new milestone in the seemingly interminable discussions on a legal and ethical framework for artificial intelligence, when it issued a proposal for a regulation, “Laying down Harmonized Rules on Artificial Intelligence”.
The regulation, proposed by the EU’s administrative body, the European Commission, under the short name “Artificial Intelligence Act”, and subject to adoption by the EU’s two legislative bodies, the European Parliament and the Council of the European Union, includes 85 articles spread over some 88 pages and nine annexes. It represents the first attempt by a political body anywhere in the world (to our knowledge, and that of the Commission) to regulate specifically and exclusively “artificial intelligence systems” (a defined term under the proposed regulation).
The new, proposed regulation culminates some three years of intense discussions concerning the potential regulation of AI amongst the EU institutions, Member States and stakeholders (as they are known in EU parlance). The key move by the European Commission to get a true substantive discussion going came in mid-2018, when it appointed a committee of independent experts, the High-Level Expert Group on Artificial Intelligence (“AI HLEG”). The 52-member group, drawn from across Europe, was composed of academics, company executives, and trade association and NGO representatives, with no representatives of government bodies included. Over a two-year period, the AI HLEG issued a set of deliverables setting out a series of concepts and principles towards a legal framework for AI, and these have served since as the foundation of the European Commission’s thinking on the issue.
The appointment of the AI HLEG showed results-oriented institutional insight: implicitly, the European Commission recognized that it did not have the internal expertise on AI to guide it, and thus appointed a broad panel of outside experts that did. This same method of recourse to external expertise permeates the Artificial Intelligence Act: in particular, a European Artificial Intelligence Board will be created as the unifying body of a new EU system, and it will be authorized to seek advice from external experts.
Another key point: the Commission set up a forum for dialogue with business (and other) interests in mind by creating, in mid-2018, a membership-based information and consultation clearinghouse open to anyone with a vested interest in AI regulation issues. Thus the European AI Alliance was brought into existence, composed, according to the European Commission, of some 4,000 members having access to forum events and communication channels on all subjects relating to the “technological and societal implications of AI”, and coming together in an annual assembly. This “community” method facilitated the consultation procedures launched by the European Commission in connection with a series of discussion and policy papers that it began issuing, again from 2018. Hundreds of companies from around the world were amongst more than 1,200 respondents to a formal consultation exercise in 2020, according to Commission statistics, and many companies went further, participating in pilot programs to test methods of ensuring what has come to be known, in the language of the EU, as “Trustworthy AI”.
We have organized our comments around ten points of the new proposed regulation and its context which are relevant for business interests. They are the following:
- A regulatory definition of artificial intelligence and a statutory vocabulary of related concepts
The High-Level Expert Group spent six months putting together, in seven pages, a comprehensive definition of “artificial intelligence”. In the Artificial Intelligence Act, the European Commission has proposed to condense the legal definition into a single sentence defining the concept of an “artificial intelligence system”, with additional detail in a short annex. Here is the proposed definition, as set out in Article 3 of the Artificial Intelligence Act:
Software that is developed with one or more of the techniques and approaches listed in Annex I [that] can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.
Annex I sets out, without any underlying detail, a distinction amongst three “techniques and approaches” of AI: (i) machine learning approaches, (ii) logic- and knowledge-based approaches and (iii) statistical approaches. Moreover, Article 3 sets out statutory definitions of 43 other AI‑related terms used in the proposed regulation, such as “provider of an AI system”, “conformity assessment” and “conformity assessment body”, “training data”, “validation data”, “testing data”, “input data”, “performance of an AI system”, “CE marking of conformity”, “remote biometric identification system”, “serious incident” and so on. These statutory concepts underpin the new legal system for AI set out in the new regulation.
- An EU instrument whose legal effects will extend to the entire world
As is the case with other horizontal EU regulations (such as competition, data privacy under the GDPR, and chemicals under REACH), the legal effects of the new regulation will be felt worldwide, under an effects doctrine. Thus, the scope extends not only to providers of AI systems established in the territory of the EU, but also to providers and users of AI systems “located in a third country, where the output produced by the system is used in the Union” (for example, through the integration of components coming from third countries into broader products or systems placed on the market or put into service in the EU), or, simply put, to third-country providers that place on the market or put into service AI systems which are somehow used in the EU.
- A focus on a limited list of prohibited AI practices and on the compliance of “High-Risk AI Systems” and “remote biometric identification systems” with regulatory requirements
In their presentation of the Artificial Intelligence Act, Commission officials use the image of a four-layered pyramid: (i) at the top, a limited number of cases (four) in which recourse to AI systems will be prohibited in the EU; (ii) next down, the so-called High-Risk AI Systems, whose typology is precisely defined and which are subject to strict regulatory requirements; (iii) further down, a vast pool of “Limited Risk” AI system uses (such as chatbots), whose regulation is reduced essentially to transparency and data quality principles (concepts previously developed in detail by the AI HLEG); and (iv) “Minimal Risk” broad-use applications, such as algorithms used in smartphone apps and video games, which are subject to general consumer protection principles. In essence, the Artificial Intelligence Act focuses on the prohibited uses and on that first, High-Risk category.
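As a rough illustration only, the four-tier pyramid can be sketched as a simple classification. The tier names track the proposal's categories, but the example use cases and all identifiers below are our own hypothetical choices, not taken from the regulation's text:

```python
from enum import Enum

# Illustrative sketch of the Act's four-tier risk pyramid.
class RiskTier(Enum):
    PROHIBITED = "unacceptable risk: use banned in the EU"
    HIGH = "high risk: strict regulatory requirements apply"
    LIMITED = "limited risk: transparency and data quality principles"
    MINIMAL = "minimal risk: general consumer protection principles"

# Hypothetical example use cases mapped to tiers.
EXAMPLES = {
    "social scoring by a public authority": RiskTier.PROHIBITED,
    "CV-screening recruitment tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "video game matchmaking algorithm": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.value}")
```

The point of the pyramid structure is that regulatory burden scales with the tier: only the top two layers carry substantive obligations.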
- Prohibited Artificial Intelligence Practices: The list is short, inspired by the human-centric approach advanced by the AI HLEG. There are four prohibited uses of AI systems, those that: (i) are designed or used in a way to deploy “subliminal techniques” in order to “distort a person’s behavior” in a manner that causes or is likely to cause harm; (ii) are designed or used in a way so as to “target” human vulnerabilities (old age, physical or mental disability), causing such persons to distort their behavior in a way causing harm; (iii) consist of social scoring techniques introduced by public authority or on its behalf, that is to say used for “evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behavior or known or predicted personality characteristics, with the social score leading to . . . detrimental or unfavorable treatment of certain natural persons or whole groups thereof”, either in “social contexts” unrelated to that of data generation or collection or in ways “unjustified or disproportionate to their social behavior or gravity”; or (iv) constitute the “use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement unless and in as far as such use is strictly necessary” in accordance with a long list of detailed objectives and conditions as set out in the proposed regulation.
- Regulation of “High-Risk AI Systems”: Subject to the four cases of prohibition, high-risk systems are the true subjects of the regulatory framework set out in the proposed regulation. The concept seems narrow, but it is not: the list of high-risk systems is extremely broad, and its exact scope could lead to contentious discussion. The text includes both pre-designated high-risk systems and any other systems which meet a list of harm criteria. For instance, AI systems intended to be used as safety components of products, systems, or equipment used in transport vehicles and which are subject to third-party conformity testing are de facto high-risk systems. Also considered high-risk AI systems are, inter alia, any AI systems used in: (i) (non-prohibited) biometric identification and categorization of natural persons; (ii) management and operation of critical infrastructure (roads, supply of utilities, etc.); (iii) access to education and vocational training institutions; (iv) decisions made in connection with the workplace: recruitment, selection, decisions on promotion and termination, employee task evaluation, and monitoring and evaluation of performance and behavior; (v) access to public assistance benefits, credit facilities (creditworthiness, credit scores), and emergency first response services (firefighters, medical aid); (vi) law enforcement practices and assessments, prediction of criminal offenses, and criminal profiling of natural persons and crime analytics based on large data sets; (vii) migration, asylum, and border control management systems; and (viii) assistance in judicial settings (“in researching and interpreting facts and the law and in applying the law to a concrete set of facts”). That is a first, extensive but non-exhaustive list; the European Commission will be empowered to augment it.
- Governance through a new institution, the European Artificial Intelligence Board, and implementation through designated “national competent authorities” (among them, a “national supervisory authority”)
The regulation proposes to set up a new governance structure, organized around a European Artificial Intelligence Board, which will advise and assist the Commission. The Board will have 29 members: one from each of the 27 EU Member States through their respective national supervisory authorities, as well as the European Data Protection Supervisor and the European Commission acting as Chair. In essence, the European Commission will exercise its powers without delegating them to the new Board, whose role is limited to two functions. First, the Board will advise and assist the Commission (presumably with the input of its own experts) through opinions and recommendations on matters concerning the implementation of the regulation. Secondly, it will ensure cooperation between the Commission and national supervisory authorities, through functions of coordination, guidance and assistance (particularly with regard to consistent application of the regulation). Basically, the Board will serve as a forum for interaction between the European Commission and the national supervisory authorities, in their enforcement role, and potentially with other EU bodies if the situation brings them to act on AI matters.
- Functional and quality requirements and new, post-market monitoring through a new “EU Database for Stand-Alone High-Risk AI Systems”, information sharing and market surveillance
The new regulation sets out a series of requirements that have to be met by High-Risk AI Systems (including registration and data transmission requirements) and ensures in-house surveillance through risk management and quality management systems requirements as well as public monitoring through a new EU Database which will enable public consultation of all data as required of all providers of High-Risk AI Systems. The fundamental requirements derive essentially from work done by the AI HLEG and include:
- Risk management requirements on an iterative, systemic basis: (i) identification and analysis of known and foreseeable risks, estimation and evaluation of the risks; (ii) analysis of post-monitoring data with a view to anticipation of other possible risks; and (iii) structure of risk management systems.
- Data quality and data governance practices.
- Technical documentation that has been prepared in such a way as to demonstrate compliance with the Artificial Intelligence Act.
- Record-keeping through logs, with specific information requirements.
- Transparency and provision of information to users.
- Human oversight (“High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.”).
- Accuracy, robustness, and cybersecurity.
All High-Risk AI Systems will be registered and their data included in the new EU Database for High-Risk AI Systems. The database will be an open system: all information collected and processed in it will be accessible to the public. Providers of High-Risk AI Systems will bear the legal responsibility for entering their own data into the system, with the European Commission providing technical and administrative support where needed.
An annex to the proposed regulation sets out the information which must be entered into the database by each provider, such as, inter alia:
- Description of intended purpose of the AI system.
- References to and copies of required certifications, and EU declarations of conformity.
- Electronic instructions for use of the AI System (except for cases of use in areas of law enforcement and migration, asylum, and border control management).
The Artificial Intelligence Act provides: “The EU database shall contain personal data only insofar as necessary for collecting and processing information in accordance with this regulation. The information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider”. In view of the broad accessibility and open nature of the database, the cautionary note regarding the use of minimal and “official” personal information here is important and indicative of the European Commission’s attempts to align with the GDPR requirements.
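To make the registration duty concrete, the provider-entered database record described above might be modeled along the following lines. This is a minimal sketch: the field names, class name, and sample values are our own illustrative assumptions, not the annex's wording.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical model of a provider's entry in the EU Database for
# High-Risk AI Systems, based on the annex items quoted above.
@dataclass
class HighRiskSystemRecord:
    provider_name: str
    intended_purpose: str              # description of the system's intended purpose
    certifications: List[str]          # references to required certifications
    eu_declaration_of_conformity: str  # reference to the EU declaration of conformity
    # Electronic instructions for use are omitted for law enforcement and
    # migration/asylum/border control uses, hence optional here.
    instructions_for_use_url: Optional[str] = None

# Hypothetical example entry.
record = HighRiskSystemRecord(
    provider_name="ExampleCorp",
    intended_purpose="creditworthiness assessment of loan applicants",
    certifications=["EU-CERT-0001"],
    eu_declaration_of_conformity="DoC-2021-001",
)
print(record.intended_purpose)
```

Since the database is publicly accessible, each such record would expose only the minimal “official” personal data noted above.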
In addition, it is the providers of High-Risk AI Systems that will have the legal obligation to put into place and document post-market monitoring. This monitoring, to be carried out in accordance with a “post-market monitoring plan” drawn up by each provider and communicated to the European Commission, is intended to “actively and systematically collect, document and analyze relevant data provided by users or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and to evaluate the continuous compliance of AI systems with the requirements of [the regulation]”. The post-market monitoring plan will be established on the basis of a template produced by the European Commission.
- Enforcement and penalties
The providers of High-Risk AI Systems will have the obligation to report serious incidents and malfunctions of their AI systems to national authorities, but only where such incidents or malfunctions would constitute a breach of “fundamental rights” under EU and Member States law.
The new regulation sets up a complex market surveillance and control system within the EU, involving the European Commission, the new European Artificial Intelligence Board, and national supervisory authorities, with broad information-gathering powers and powers to order corrective measures and even to prohibit infringing AI systems which have not been corrected as ordered. Member States will have the responsibility to lay down their own rules on penalties applicable in the case of infringements of the new regulation, while complying with the rules on the application of administrative fines set out in the new regulation. As applied to companies, penalties for infringement are intended to be “effective, proportionate and dissuasive”. The Commission’s proposal for penalties (in the form of administrative fines) on infringing companies is certainly dissuasive:
- Non-compliance with rules on Prohibited AI System practices: fine of up to 6% of worldwide annual revenues (calculated at “undertaking” (group) level, on a consolidated basis).
- Non-compliance with the data governance rules requiring that High-Risk AI Systems which involve the training of models with data be developed on the basis of training, validation, and testing data sets meeting specified quality criteria: fine of up to 6% of worldwide annual revenues (at group level).
- Non-compliance with any other requirements or obligations under the Artificial Intelligence Act: fine of up to 4% of total worldwide annual revenues (at group level).
- The supply of incorrect, incomplete, or misleading information to public authority bodies in reply to a request: fine of up to 2% of total worldwide annual revenues (at group level).
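The tiered caps above reduce to simple arithmetic on group-level worldwide annual revenue. The sketch below is our own hypothetical helper (the percentages come from the proposal; the dictionary keys and function name are illustrative):

```python
# Maximum administrative fine, as a share of group-level worldwide
# annual revenue, per category of infringement under the proposal.
FINE_CAPS = {
    "prohibited_practice": 0.06,     # prohibited AI practices: up to 6%
    "data_quality_breach": 0.06,     # training/validation/testing data rules: up to 6%
    "other_obligation": 0.04,        # any other requirement or obligation: up to 4%
    "misleading_information": 0.02,  # incorrect/incomplete replies to authorities: up to 2%
}

def max_fine(infringement: str, worldwide_annual_revenue: float) -> float:
    """Return the maximum fine for an infringement category."""
    return FINE_CAPS[infringement] * worldwide_annual_revenue

# A group with EUR 500 million in consolidated worldwide revenue:
for kind in FINE_CAPS:
    print(f"{kind}: up to EUR {max_fine(kind, 500_000_000):,.0f}")
```

Because the caps are computed at “undertaking” (group) level on a consolidated basis, exposure for large corporate groups can be very substantial.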
- Relationship between the new artificial intelligence regulation and the EU’s General Data Protection Regulation (“GDPR”), which became applicable in 2018
In some respects, the Artificial Intelligence Act bears striking similarities to the GDPR—or at least to its underlying principles. The degree of harm and risk to individuals’ fundamental rights, for example, is a key component in the categorization of the various AI systems, which in turn determines the rules applying to each. The European Commission has, however, been keen not to undermine the operation of the GDPR in the context of AI, and has included statements in its proposal to ensure consistent use of certain notions that appear in both legislative texts, such as biometric data. Without a doubt, the most obvious—and arguably most significant—similarity with the GDPR is the extraterritorial scope. Like the GDPR, the Artificial Intelligence Act can apply to providers of AI systems outside the EU where the system, or the “output” of the system, is used in the EU.
The impact of the new act looks set to extend beyond geographical borders, not only in terms of application but also in setting a standard. Much like the GDPR back in 2018, the Artificial Intelligence Act looks set to be held up as the bar to be reached for AI regulation in jurisdictions around the globe. It remains to be seen how other nations react, but the EU’s intentions are clear. In the words of Margrethe Vestager, Executive Vice-President of the European Commission: “By setting the standards we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way”.
- European Commission working methods in setting up the new regulatory system – the first mover advantage
The new Artificial Intelligence Act can be seen as a work in progress which represents a first EU legislative step on AI. The draft regulation builds upon three years of intense public discussion during the period 2018-2021, involving policy papers and contributions not only from the European Commission itself (for instance, a discussion paper of February 19, 2020 entitled “White Paper on Artificial Intelligence – A European Approach to Excellence and Trust”), but also from other bodies. The AI HLEG issued a number of key deliverables, including papers entitled “Ethics Guidelines for Trustworthy AI”, “The Assessment List for Trustworthy Artificial Intelligence”, and “Policy and Investment Recommendations for Trustworthy AI”. The European Parliament, through its various committees, issued a number of recommendation papers and studies (starting in 2017, in fact). Hundreds of private sector and NGO organizations designed and published AI ethics charters and similar documents, manifestly inspired by EU-generated concepts (“Trustworthy AI”) and approaches. The European Commission created, in 2018, an AI forum having thousands of members, the EU AI Alliance, and in 2019 organized the alliance’s first assembly. Public consultations took place. The end result is that various concepts and approaches emerged, and they are now embodied in a proposed statutory instrument, the EU’s Artificial Intelligence Act. The EU enjoys a first mover advantage in that, to its knowledge (as stated publicly), no other political entity in the world has proposed a legal system to regulate AI specifically. As noted above, it is the same type of first mover advantage that the EU enjoyed through the GDPR, which became applicable in mid-2018: the EU established a conceptual and legal framework which has since served as a benchmark for regulators throughout the world.
- Actual and potential role of companies in the development of the new EU regulatory system
Time and time again, commentators have bemoaned the absence of an AI regulatory framework, that is, the absence of legal certainty through reference points set by public authorities. Over the past three years, the European Commission has laid the framework of an AI legislative system. Now is the time for companies to take stock of where they stand with regard to that system. Information from all sources on AI in the EU context is available to members of the EU AI Alliance, and any company can join the AI Alliance through a simple membership procedure (there are currently some 4,000 members). The AI Alliance provides a forum for discussion involving the EU institutions. As a general observation, many companies are already orienting their own internal organization and external communication around the EU approach. For instance, in April 2021, one of the world’s best-known companies organized a three-week pan-European virtual forum open to all: “Data Science & Law Forum 3.0: Operationalizing Responsible AI”.
- Rest of the world: comparison of the new EU system with the U.S. approach to regulation of artificial intelligence
Comparisons between this new EU system and the U.S. are simple because there is nothing presently to compare. Unlike the EU, the U.S. has not yet developed a comprehensive approach toward AI. Until recently, the focus in the United States has been almost entirely on competitiveness—how to ensure American leadership or, more accurately, that American companies do not fall behind China in the race to develop sophisticated AI applications.
That focus is starting to expand to include consideration of ethical considerations and individual rights. Legislation introduced or under consideration in Congress would require AI systems to address potential bias; mandate accountability, training, and rights of redress; and provide transparency regarding the factors that go into AI decisions. Several states also are considering legislation, raising the possibility of a patchwork of different state rules governing AI, much as is the case with data protection.