Summary
The EU AI Act, which came into force on 1 August 2024, introduces a risk-based legal framework for AI systems, categorising them into different risk levels with corresponding obligations. Some uses of AI are banned outright, for example social scoring by governments or subliminal manipulation of people.
Similar to the GDPR, the Act also emphasises transparency and accountability, and highlights the importance of data governance, data quality, and data protection.
This document summarises the key obligations of those subject to the EU AI Act who provide or deploy AI. AI systems and models already on the market may be exempt or benefit from longer compliance deadlines.
Territorial Scope of the AI Act
Territorially, the AI Act applies to:
- deployers that have their place of establishment, or are located, within the EU;
- providers – whether or not they are established or located within the EU – that place AI systems on the market or put them into service in the EU;
- those who import or distribute AI systems in the EU;
- product manufacturers who place on the market or put into service AI systems in the EU under their own name or trademark;
- providers and deployers of AI systems, regardless of where they are established or located, where the output of the AI system is used in the EU;
- persons located in the EU who are affected by the use of an AI system.
Risk-Based Classification
The AI Act introduces a risk-based legal framework based on the specific AI system’s use and the associated risks. It establishes four risk-based types of AI systems:
- Unacceptable risk – prohibited AI systems
- High-risk AI systems
- Limited risk AI systems
- Minimal risk AI systems
The obligations of the AI Act fall mainly on providers and deployers of AI systems. A summary of the risk ratings and the corresponding obligations of providers and deployers of high-risk, limited/minimal-risk and general-purpose AI systems is set out below.
HIGH RISK: for example, critical infrastructure, hiring processes, employee management, educational and vocational training opportunities and outcomes, credit scoring, emotion recognition, prioritisation of emergency first-response services, risk assessment and pricing of life and health insurance, some areas of law enforcement, migration, asylum and border management, and the administration of justice and democratic processes.
Providers must:
- register the system in an EU database
- establish, document and maintain an appropriate AI risk management system
- implement effective data governance for training, testing and validation data, including bias mitigation
- draft and maintain AI system technical documentation
- ensure record keeping, logging and tracing obligations are met
- put in place robust cyber security controls
- ensure the system is fair and transparent
- enable effective human oversight over the system
- in some instances, complete a conformity assessment before releasing the system
- affix the CE marking to the AI system and include the provider’s contact information on the AI system, packaging or accompanying documentation
Deployers must:
- ensure a competent person with appropriate training, authority and support provides oversight of the system
- ensure that data input is relevant and sufficiently representative to meet the purpose
- inform impacted individuals where a system will make or assist in making decisions about them
- where AI will impact workers, inform them and their representatives that they are subject to a high-risk AI system
- conduct a fundamental rights impact assessment – e.g. where credit scoring or pricing of life or health insurance is involved
- where AI results in a legal (or similar) effect, provide a clear, comprehensible explanation of the role of the AI system in the decision-making process, along with the main elements of the decision
- develop an ethical AI Code of Conduct
LIMITED OR MINIMAL RISK: for example, AI systems with specific transparency risks or an ability to mislead; AI systems intended to interact directly with individuals; AI systems generating synthetic image, audio, video or text content; emotion recognition systems; biometric categorisation systems; deepfake systems; and systems that generate or manipulate public-interest information texts made available to the public.
Providers and Deployers must:
- Be transparent – users must be aware that they are interacting with a machine
- Ensure that any AI-generated content is detectable as artificially generated and/or manipulated
- Comply with any labelling obligations
- Develop an ethical AI Code of Conduct (recommended)
GENERAL-PURPOSE AI MODELS, including models that can perform general functions such as image/speech recognition, audio/visual generation, pattern detection, question answering, translation, chatbots, spam filtering, and so on.
Providers must:
- draft and maintain AI system technical documentation
- provide information to those who intend to integrate the AI model into their own AI systems
- implement a policy to comply with EU law on copyright and related rights
- document and make public a meaningful summary of the content used for training the general-purpose AI model
Fines
Fines for non-compliance range from €7.5 million to a maximum of €35 million (or 1% to 7% of global annual turnover), depending on the severity of the infringement.
| PENALTY / FINE | COMPLIANCE ISSUE |
| --- | --- |
| €35 million or 7% of total worldwide annual turnover for the preceding year, whichever is higher | Non-compliance with the rules on prohibited AI systems |
| €15 million or 3% of total worldwide annual turnover for the preceding year, whichever is higher | Non-compliance with most other obligations under the AI Act, including non-compliance by providers of general-purpose AI models |
| €7.5 million or 1% of total worldwide annual turnover for the preceding year, whichever is higher | Supply of incorrect, incomplete or misleading information |
If you have any questions about how you can use AI in compliance with the GDPR or other data protection legislation, please call +44 1787 277742 or email dc@datacompliant.co.uk
And please take a look at our services.
Victoria Tuffill – 12th August 2024
