In April 2021, the European Commission published a proposal for a regulation on artificial intelligence, now commonly known as the Artificial Intelligence Act (“AIA”). The AIA is the first legislation of its kind with global impact, enacting horizontal rules for the development, placing on the market and use of AI systems.
What?
Artificial intelligence (“AI”) technology is a central enabler of digital transformation, bringing a wide array of economic and societal advantages to both the private and public sectors. For example, AI enables automation, smart decision-making, forecasting, enhanced customer experiences and more efficient research.
The AIA is part of the broader European AI strategy and package, which support the objective of the EU being a global leader in the development of secure, trustworthy and ethical AI. It imposes obligations on various actors involved in the development, placement on the market, putting into service and use of AI systems, depending on the extent and intensity of the risk posed by the AI system.
Obligations?
- The AIA takes a risk-based approach and introduces a classification of various AI systems. Different risk levels (minimal, limited, high and unacceptable risk) come with different requirements and obligations.
High-risk AI systems, for example, can only be placed on the EU market or put into service if they comply with several strict mandatory requirements (e.g. a conformity assessment procedure, a quality management system and technical documentation). In addition, such high-risk AI systems may only be used under certain conditions (e.g. use in accordance with the instructions for use, or after conducting a fundamental rights impact assessment). A CV-screening tool and an AI-based credit application assessment system are examples of systems that will be considered high-risk.
- The AIA bans several AI systems that present an unacceptable risk, such as (i) certain AI-based social scoring systems, and (ii) real-time remote biometric identification systems in publicly accessible spaces.
- AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio or video content (e.g. deep fakes) will be subject to harmonised transparency rules.
- In addition, transparency and other specific obligations also apply in the context of generative AI (e.g. ChatGPT and DALL-E) and foundation models (e.g. GPT-4).
- The AIA provides for the creation of a new European Artificial Intelligence Office (“Office”), composed of, among others, representative(s) from each national supervisory authority, the European Data Protection Supervisor, the European Commission, ENISA, FRA, etc. The Office will have the power to issue opinions, recommendations and guidelines regarding the implementation of the AIA and will help ensure effective cooperation between the national supervisory authorities and the Commission. Similar to national supervisory authorities under the General Data Protection Regulation (“GDPR”), the national AI authorities will supervise the application and implementation of the AIA. Spain is one step ahead of Belgium, as it has already decided to establish such a supervisory AI authority.
Who?
The AIA has a broad scope of application and will apply to:
- providers: developing an AI system or having an AI system developed with a view to placing it on the market or putting it into service under their own name or trade mark, whether for payment or free of charge;
- users/deployers: using an AI system under their authority, except where the AI system is used in the course of a personal, non-professional activity; and
- authorised representatives: having received a written mandate from a provider of an AI system to perform and carry out on its behalf the obligations and procedures established by the AIA.
In addition, the AIA also imposes certain obligations on importers and distributors of AI systems, as well as product manufacturers.
Like the General Data Protection Regulation, the AIA will have extraterritorial effect. A “Brussels effect” is therefore to be expected, and already seems to be setting in, given the recent executive order issued by President Biden and the Bletchley Declaration adopted by the countries attending the AI Safety Summit in the UK in November 2023.
Sanctions?
Non-compliance with the AIA can result in various sanctions. For example, a national supervisory authority can take corrective measures (e.g. withdrawal of the AI system from the market). In addition, Member States must set administrative fines within the thresholds laid down in the AIA (up to EUR 40 million or 7% of annual global turnover). Finally, Member States must also provide for and enforce effective, proportionate and dissuasive penalties.
When?
A political agreement on the AIA was reached on 8 December 2023 during the fifth round of trilogue between the European Commission, the Council and the European Parliament. However, the draft AIA still needs to be formally adopted. Be aware that most provisions of the AIA will only apply after a transition period of 24 months following its entry into force (with exceptions, e.g. for the governance provisions).
What action do you need to take?
Considering, inter alia, (i) the number and extent of AIA obligations, and (ii) the large administrative fines in case of non-compliance with the AIA, we recommend that you take action now to prepare for compliance with the AIA. For example: make an inventory of your AI systems with corresponding risk assessments, draft the necessary policies, and implement the requirements of the AIA when (further) developing your AI systems.
Our team is ready to assist you with a multidisciplinary approach. We can advise and assist you in making the necessary assessments and drafting documentation.