The Artificial Intelligence Act: The Council and the Parliament reached a deal on the legal text

Background information

The Commission's proposal for a Regulation laying down harmonised rules on artificial intelligence (the Artificial Intelligence Act) follows a risk-based approach and lays down a uniform, horizontal legal framework for AI that aims to ensure legal certainty. The draft regulation aims to promote investment and innovation in AI, enhance governance and effective enforcement of existing law on fundamental rights and safety, and facilitate the development of a single market for AI applications. At the end of 2022, the Council adopted its general approach (negotiating mandate) and entered interinstitutional talks with the European Parliament (so-called 'trilogues') in mid-2023.

On 8 December 2023, the European legislators reached an agreement on the groundbreaking European law on the use of artificial intelligence (AI), the so-called Artificial Intelligence Act. The upcoming regulation will be directly applicable in the Member States (and the wider European Economic Area). It follows a 'risk-based' approach, regulating AI according to its capacity to cause harm to society: the higher the risk, the stricter the rules. The ambition of the EU is to set a global standard for AI regulation.

The provisional agreement on the text includes a revised definition of an AI system with clear criteria for distinguishing AI from simpler software systems. The regulation will not apply to areas such as national security, to systems used exclusively for military or defence purposes, to AI systems used for the sole purpose of research and innovation, or to AI used for non-professional reasons.

AI systems presenting only limited risk will be subject to very light transparency obligations directed at users. A wide range of high-risk AI systems will be authorised, subject to requirements that are more technically feasible and less burdensome. The revised text of the draft regulation clarifies the allocation of roles and responsibilities among the various actors in the value chain. It also clarifies the relationship between responsibilities under the AI Act and responsibilities that already exist under other legislation, such as the relevant EU data protection or sectoral legislation. Systems presenting unacceptable risk will be banned, for example: cognitive behavioural manipulation, the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and in educational institutions, social scoring, biometric categorisation to infer sensitive data such as sexual orientation or religious beliefs, and some cases of predictive policing for individuals.

An emergency procedure was introduced, allowing law enforcement agencies to deploy, in cases of urgency, a high-risk AI tool that has not passed the conformity assessment procedure. A specific mechanism has also been introduced to ensure that fundamental rights are sufficiently protected against potential misuses of AI systems. Real-time remote biometric identification systems in publicly accessible spaces may be used exceptionally, where strictly necessary for law enforcement purposes (searches for victims of certain crimes, prevention of terrorist attacks, and searches for people suspected of the most serious crimes).

New provisions have been added to take into account situations where AI systems can be used for many different purposes (general-purpose AI), and where general-purpose AI technology is subsequently integrated into another high-risk system. The provisional agreement also addresses the specific cases of general-purpose AI (GPAI) systems. Specific rules have also been agreed for foundation models: large systems capable of competently performing a wide range of distinctive tasks, such as generating video, text, and images, conversing in natural language, computing, or generating computer code. The provisional agreement provides that foundation models must comply with specific transparency obligations before they are placed on the market. A stricter regime was introduced for 'high-impact' foundation models (models trained with large amounts of data and with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain).

An AI Office within the Commission will be set up to oversee the most advanced AI models, contribute to standardisation and testing, and enforce the common rules in all Member States. A scientific panel of independent experts will advise the AI Office on GPAI models. The AI Board, composed of Member States' representatives, will act as a coordination platform and an advisory body to the Commission. An advisory forum for industry representatives, SMEs, start-ups, civil society, and academia will be set up to provide technical expertise to the AI Board.

The fines for violations of the AI Act will be set as a percentage of the offender's global annual turnover or a predetermined amount, whichever is higher: €35 million or 7% for violations involving the banned AI applications, €15 million or 3% for violations of the AI Act's obligations, and €7.5 million or 1.5% for the supply of incorrect information. Proportionate caps on administrative fines are introduced for SMEs and start-ups. A natural or legal person may submit a complaint to the relevant market surveillance authority, and such complaints should be handled under that authority's dedicated procedures.
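To illustrate the 'whichever is higher' rule with hypothetical figures: a company with a global annual turnover of €1 billion that deployed a banned AI application could face a fine of up to €70 million (7% of turnover, which exceeds the €35 million fixed amount), whereas for a company with a turnover of €100 million the €35 million fixed amount would apply, since 7% of its turnover is only €7 million.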

A fundamental rights impact assessment must be carried out before a high-risk AI system is placed on the market, and the agreement provides for increased transparency regarding the use of high-risk AI systems. Certain users of high-risk AI systems that are public entities will also be obliged to register in the EU database for high-risk AI systems. Users of an emotion recognition system will be obliged to inform natural persons when they are being exposed to such a system.

The measures in support of innovation have been modified: AI regulatory sandboxes (controlled environments for the development, testing, and validation of innovative AI systems) should also allow for the testing of innovative AI systems in real-world conditions. New provisions allow such real-world testing subject to specific conditions and safeguards. The agreement also lists actions to be undertaken to support smaller operators and provides for some limited and clearly specified derogations.

What comes next?

Following the provisional agreement between the institutions, the work will continue at the technical level in the coming weeks to finalise the details of the new regulation. The entire text will need to be confirmed by both institutions and undergo legal-linguistic revision before formal adoption by the co-legislators. The provisional agreement provides that the AI Act should apply two years after it enters into force, with some exceptions for specific provisions.

The authors are DGKV's Partner Violetta Kunze and Senior Associate Georgi Sulev.