On March 13, 2024, the European Parliament approved a proposal for the Artificial Intelligence Act, a long-awaited, first-of-its-kind legal framework governing artificial intelligence (AI). As the development and use of AI and large language model (LLM) technologies continue to surge, the European Parliament states that the act aims to “protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation. . . .” The act will enter into force 20 days after its publication in the Official Journal. This alert serves as a guide to help businesses prepare for the act taking legal effect; its provisions are identical to those of the approved proposal.
Scope
The act applies to the use of AI systems (i.e., products and services powered by AI) and is organized around the uses likely to produce the highest risks. An AI system is defined as a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Like the EU General Data Protection Regulation (GDPR), the act applies to all providers and deployers of AI systems marketed or used within countries of the European Union (EU), whether or not the entities are based in the EU. US and other non-European businesses are therefore subject to the act’s requirements and face exposure to penalties for noncompliance.
AI System Categorization
The act takes a risk-based approach, categorizing AI systems by their capacity to harm society. The greater the risk, the more stringent the rules and obligations concerning AI use become. As illustrated in the risk pyramid below, companies must determine which risk category their AI systems fall into in order to identify which obligations apply to their operations.
[Illustration: AI risk pyramid. Source: European Commission]
- Prohibited Practices. The act expressly prohibits AI practices that pose unacceptable risks. These prohibited practices violate fundamental rights and values and pose a clear threat to people’s safety, livelihoods, and rights. These technologies must be removed from the market within six months after the act enters into force. A list of the more notable prohibitions appears in the “Deeper Look” section below.
- High Risk. AI systems categorized as high risk encompass various sectors, including critical infrastructure, educational or vocational training, safety components of products, employment and management of workers, essential private and public services, law enforcement, migration and border control, and the administration of justice and democratic processes. These technologies must comply with strict obligations before being put on the market, including risk assessment and mitigation systems, use of high-quality datasets to minimize risks and discriminatory outcomes, activity logging, detailed documentation, human oversight, and a high level of robustness, security, and accuracy.
- Limited Risk. Limited-risk AI systems pertain to risks associated with the lack of transparency in AI usage. The act provides specific transparency obligations to ensure users are informed when interacting with AI systems so they can decide whether to continue. AI system providers must ensure that AI-generated content, including audio and video content constituting deep fakes, is identifiable and labeled as artificially generated.
- Minimal or No Risk. Under the act, minimal- or no-risk AI systems, such as AI-enabled video game filters or spam filters, are freely usable.
Deeper Look
The act provides numerous requirements and exceptions. The following is a summary of provisions that we feel are particularly noteworthy:
- Notable Prohibitions. The act provides, among other prohibitions, those that involve or relate to:
- appreciably impairing a person’s ability to make an informed decision, thereby causing the person to make a decision that the person would not otherwise have made, in a manner that causes or is likely to cause that person, another person, or a group of persons significant harm;
- exploiting any of the vulnerabilities of a person or a specific group of persons;
- evaluating or classifying persons or groups of persons over a certain period of time based on their social behavior, resulting in detrimental or unfavorable treatment;
- making risk assessments of persons in order to assess or predict the likelihood of a person committing a criminal offense, based solely on the profiling of a person or on assessing the person’s personality traits and characteristics;
- creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage; or
- using biometric categorization systems that categorize persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
- Notable Duties for the Use of High-Risk AI Systems. The act imposes, among other requirements for the use of high-risk AI systems, a duty to:
- create a risk management system;
- implement appropriate measures to detect, prevent, and mitigate possible biases;
- create technical documentation in such a way as to demonstrate that the high-risk AI system complies with the act’s requirements and to provide national competent authorities and notified bodies with the necessary information in a clear and comprehensive form to assess the compliance of the AI system with those requirements;
- implement or design the high-risk AI system in a way that technically allows for the automatic recording of events (“logs”) over its lifetime (a minimal logging sketch follows this list);
- provide instructions for use in an appropriate digital format or otherwise that include concise, complete, correct, and clear information that is relevant, accessible, and comprehensible to deployers;
- design the high-risk AI system for human oversight commensurate to the risks, level of autonomy, and context of use of the high-risk AI system;
- implement technical redundancy solutions, which may include backup or fail-safe plans;
- implement measures to prevent, detect, respond to, resolve, and control for attacks trying to manipulate the training data set (“data poisoning”) or pre-trained components used in training (“model poisoning”), inputs designed to cause the AI model to make a mistake (“adversarial examples” or “model evasion”), confidentiality attacks, or model flaws;
- affix the European CE marking to the high-risk AI system or, where that is not possible, to its packaging or its accompanying documentation, to indicate conformity with the act;
- implement procedures related to the reporting of a serious incident;
- retain records and documentation for a period ending 10 years after the high-risk AI system has been placed on the market or put into service;
- appoint an authorized EU representative;
- inform persons that they are subject to the use of a high-risk AI system of a type specified in Annex III of the act that makes decisions, or assists in making decisions, related to those persons;
- perform a conformity assessment;
- register the high-risk AI system in the EU database;
- report any serious incident to the market surveillance authorities of the EU countries where the incident occurred within time limits that range from two to 15 days, depending upon the type of incident;
- provide market surveillance authorities (national authorities of the EU countries) access to the source code of the high-risk AI system, subject to certain confidentiality obligations of the authorities; and
- in response to requests from persons affected by a decision made on the basis of the output of the high-risk AI system, provide clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision made.
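To make the logging duty referenced above more concrete, below is a minimal sketch of how a deployer might automatically record model events in an append-only, timestamped form. The act does not prescribe a log format or schema; the field names, file location, and JSON Lines layout here are illustrative assumptions only.

```python
import json
import hashlib
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("ai_system_events.jsonl")  # hypothetical log location

def log_event(event_type: str, model_id: str, payload: dict) -> None:
    """Append one timestamped event record (JSON Lines) for later audit.

    The schema is illustrative; the act requires automatic event
    recording but does not mandate specific fields or formats.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,  # e.g., "inference", "human_override"
        "model_id": model_id,
        # Hash the payload rather than storing raw inputs, to limit
        # retention of personal data while preserving traceability.
        "payload_sha256": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest(),
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage: record an inference made by a hypothetical scoring model.
log_event("inference", "credit-scorer-v2", {"applicant_id": 123, "score": 0.82})
```

An append-only, hash-based design like this also dovetails with the record-retention and bias-detection duties above, since records cannot be silently altered after the fact.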
- Notices. The act provides, among other notice requirements for the use of AI systems, those that require or involve a duty to:
- ensure that the AI systems intended to interact directly with persons are designed and developed in such a way that the persons concerned are informed that they are interacting with an AI system, unless this is obvious from the point of view of a person who is reasonably well informed, observant, and circumspect, taking into account the circumstances and the context of use;
- with respect to outputs of AI systems that generate synthetic audio, image, video, or text content, mark the output in a machine-readable format so that it is detectable as artificially generated or manipulated (one possible marking approach is sketched after this list);
- with respect to an emotion recognition system or a biometric categorization system, inform the persons exposed thereto of the operation of the system;
- with respect to an AI system that generates or manipulates image, audio, or video content constituting a deep fake, disclose that the content has been artificially generated or manipulated; and
- with respect to an AI system that generates or manipulates text that is published with the purpose of informing the public on matters of public interest, disclose that the text has been artificially generated or manipulated.
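As one illustration of the machine-readable marking duty, the sketch below embeds a provenance label in a PNG image’s metadata using the Pillow library. The act does not mandate this mechanism, and the key names are assumptions for illustration; in practice, providers may converge on content-provenance or watermarking standards.

```python
# A minimal sketch: label a generated PNG as AI-generated via text metadata.
# Requires Pillow (pip install Pillow). Key names are illustrative assumptions;
# the act requires machine-readable marking but does not prescribe a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # machine-readable flag
    metadata.add_text("generator", generator)  # e.g., model name/version
    image.save(dst_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    # Pillow exposes PNG text chunks via the image's `text` attribute.
    return Image.open(path).text.get("ai_generated") == "true"

# Example usage with hypothetical file names:
# mark_as_ai_generated("output.png", "output_marked.png", "demo-model-v1")
# assert is_marked_ai_generated("output_marked.png")
```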
- Sandbox Resources. Providers and prospective providers may participate in the EU’s AI regulatory sandboxes. These sandboxes facilitate testing, including the testing of high-risk AI systems in real-world conditions with informed consent from the subjects prior to their participation.
- Penalties. The act sets forth, among other regulatory penalties, administrative fines for violation of the prohibitions summarized above. These fines can reach 35 million euros or, if the offender is a business, seven percent of its total worldwide annual turnover for the preceding financial year, whichever is higher.
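To illustrate the “whichever is higher” mechanic with an assumed figure: a business with a total worldwide annual turnover of 1 billion euros in the preceding financial year could face a fine of up to 70 million euros, because seven percent of that turnover exceeds the 35-million-euro alternative.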
Looking Ahead
The European Union Artificial Intelligence Act marks a pivotal moment in regulating AI and will affect how AI technologies are developed and deployed globally. Full applicability of the act will occur two years after it enters into force, with exceptions: prohibitions will take effect after six months, governance rules and obligations for general-purpose AI models will become applicable after 12 months, and regulations for AI systems embedded into regulated products will apply after 36 months. Businesses should evaluate the risk level of any AI systems they use, develop, or market in the EU and assess the obligations that apply.
For more information about the European Union Artificial Intelligence Act and how it may affect your operations as a provider, developer, or implementer of AI systems, please contact Renato Smith, Data Security & Technology Practice Area co-chair, at rsmith@barclaydamon.com; Celine Dorsainvil, associate, at cdorsainvil@barclaydamon.com; or another member of the firm’s Data Security & Technology Practice Area.