The AI Act aims to provide a comprehensive framework for AI development and deployment, fostering innovation while safeguarding the public interest. It establishes a governance structure for enforcement and compliance, including national supervisory authorities and an EU-level AI board. The legislation is part of a broader European strategy to make the EU a global leader in AI while promoting human-centred and sustainable AI development.
The Artificial Intelligence Act (AI Act) is a European Union regulation concerning artificial intelligence (AI). It establishes a common regulatory and legal framework for AI within the European Union (EU). It came into force on 1 August 2024, with provisions coming into effect gradually over the following 6 to 36 months.
It covers all types of AI across a broad range of sectors, with exceptions for AI systems used solely for military, national security, research and non-professional purposes. As a piece of product regulation, it does not confer rights on individuals, but regulates the providers of AI systems and entities using AI in a professional context.
The Act classifies non-exempt AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited, minimal – plus an additional category for general-purpose AI.
- Applications with unacceptable risks are banned.
- High-risk applications must comply with security, transparency and quality obligations, and undergo conformity assessments.
- Limited-risk applications only have transparency obligations.
- Minimal-risk applications are not regulated.
For general-purpose AI, transparency requirements are imposed, with reduced requirements for open source models, and additional evaluations for high-capability models.
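The tiered structure above can be sketched as a simple lookup. This is purely an illustration of the classification scheme as summarized here, not anything defined in the Act itself; the enum names and obligation labels are invented for clarity:

```python
from enum import Enum

class RiskLevel(Enum):
    """Risk tiers under the AI Act (illustrative model only)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    GENERAL_PURPOSE = "general-purpose"  # separate category, not a risk tier

# Hypothetical mapping from tier to the obligations summarized above.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["banned"],
    RiskLevel.HIGH: ["security", "transparency", "quality",
                     "conformity assessment"],
    RiskLevel.LIMITED: ["transparency"],
    RiskLevel.MINIMAL: [],  # not regulated
    # Reduced requirements apply to open-source models; additional
    # evaluations apply to high-capability models.
    RiskLevel.GENERAL_PURPOSE: ["transparency"],
}

def obligations_for(level: RiskLevel) -> list[str]:
    """Return the summarized obligations for a given tier."""
    return OBLIGATIONS[level]
```

For example, `obligations_for(RiskLevel.LIMITED)` yields only a transparency obligation, mirroring the bullet list above.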
The Act also creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation. Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU if they have users within the EU.
Proposed by the European Commission on 21 April 2021, it passed the European Parliament on 13 March 2024, and was unanimously approved by the EU Council on 21 May 2024. The draft Act was revised to address the rise in popularity of generative artificial intelligence systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework.
The AI Act is seen as a significant step towards regulating AI technology and ensuring its ethical use and deployment in society.