AI Act
What is the AI Act?
The AI Act is a European Union regulation, originally proposed by the European Commission, aimed at regulating artificial intelligence within the European Union. Its main objectives are to ensure that AI systems are safe, respect fundamental rights, and promote trust in AI technologies. The AI Act categorizes AI systems based on their risk levels into four main categories:
- Unacceptable Risk: AI systems that pose a clear threat to safety or fundamental rights. These will be banned.
- High Risk: AI systems that significantly affect people's lives, such as in critical infrastructure, education, and employment. These systems will be subject to strict requirements related to transparency, accountability, and oversight.
- Limited Risk: AI systems that have specific transparency obligations. Users must be informed that they are interacting with an AI system.
- Minimal Risk: AI systems with minimal or no risk will have no specific legal requirements.
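For readers who think in data structures, the four risk tiers above can be summarized as a simple mapping from risk level to the obligations just described. This is a purely illustrative sketch, not anything defined by the regulation itself; the enum names and obligation strings are assumptions chosen for readability, and the actual legal requirements are far more detailed.

```python
from enum import Enum


class RiskLevel(Enum):
    """Illustrative only: the four risk tiers described in the AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Hypothetical mapping of each tier to the obligations summarized above;
# the wording is a paraphrase of this page, not legal text.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["prohibited - may not be placed on the EU market"],
    RiskLevel.HIGH: [
        "conformity assessment before deployment",
        "transparency, accountability and oversight requirements",
    ],
    RiskLevel.LIMITED: ["inform users that they are interacting with an AI system"],
    RiskLevel.MINIMAL: ["no specific legal requirements"],
}

if __name__ == "__main__":
    for level in RiskLevel:
        print(f"{level.value}: {'; '.join(OBLIGATIONS[level])}")
```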
The AI Act aims to provide a comprehensive framework for AI development and deployment, fostering innovation while safeguarding public interest. It will also establish a governance structure for effective enforcement and compliance, including national supervisory authorities and an EU-wide AI board.
The legislation is part of a broader European strategy to become a global leader in AI while promoting human-centered and sustainable AI development. Following negotiations within the European Parliament and among member states, the final text was adopted in 2024.
- Snippet from Wikipedia: Artificial Intelligence Act
The Artificial Intelligence Act (AI Act) is a European Union regulation concerning artificial intelligence (AI). It establishes a common regulatory and legal framework for AI within the European Union (EU). It came into force on 1 August 2024, with provisions that shall come into operation gradually over the following 6 to 36 months.
It covers all types of AI across a broad range of sectors, with exceptions for AI systems used solely for military, national security, research and non-professional purposes. As a piece of product regulation, it does not confer rights on individuals, but regulates the providers of AI systems and entities using AI in a professional context.
The Act classifies non-exempt AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited, minimal – plus an additional category for general-purpose AI.
- Applications with unacceptable risks are banned.
- High-risk applications must comply with security, transparency and quality obligations, and undergo conformity assessments.
- Limited-risk applications only have transparency obligations.
- Minimal-risk applications are not regulated.
For general-purpose AI, transparency requirements are imposed, with reduced requirements for open source models, and additional evaluations for high-capability models.
The Act also creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation. Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU if they have users within the EU.
Proposed by the European Commission on 21 April 2021, it passed the European Parliament on 13 March 2024, and was unanimously approved by the EU Council on 21 May 2024. The draft Act was revised to address the rise in popularity of generative artificial intelligence systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework.
The AI Act refers to a regulation, proposed by the European Commission, that establishes a comprehensive framework for artificial intelligence within the European Union. It is part of the EU's broader digital strategy and seeks to ensure that AI systems are safe and respect fundamental rights while fostering innovation.
Key features of the AI Act include:
- Risk-based classification of AI systems, categorizing them into:
  - Unacceptable-risk (prohibited)
  - High-risk
  - Limited-risk
  - Minimal-risk
- Requirements for high-risk AI systems, including:
  - Strict quality management systems
  - Robust data governance and management
  - Transparency and documentation obligations
- Provisions for enforcement and penalties for non-compliance.
- Encouragement of collaboration and innovation in the AI sector.
The AI Act is seen as a significant step towards regulating AI technology and ensuring its ethical use and deployment in society.
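The high-risk requirements listed above lend themselves to a simple checklist representation. The sketch below is a hypothetical self-assessment structure, not an official compliance tool; the field names are assumptions chosen to mirror the bullets on this page and do not correspond to any article of the Act.

```python
from dataclasses import dataclass, fields


@dataclass
class HighRiskChecklist:
    """Hypothetical self-assessment flags mirroring the bullets above."""
    quality_management_system: bool = False
    data_governance: bool = False
    transparency_documentation: bool = False
    conformity_assessment: bool = False

    def gaps(self) -> list[str]:
        """Return the names of items that are still unchecked."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]


if __name__ == "__main__":
    checklist = HighRiskChecklist(quality_management_system=True)
    print("Open items:", checklist.gaps())
```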
External links:
- AI Act | Shaping Europe’s digital future — digital-strategy.ec.europa.eu
- EU AI Act: first regulation on artificial intelligence | Topics | European Parliament — europarl.europa.eu
  The use of artificial intelligence in the EU will be regulated by the AI Act, the world’s first comprehensive AI law. Find out how it will protect you.
- EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act — artificialintelligenceact.eu
- CEO's Guide to EU AI Act - Is Your AI Ready? — holisticai.com
  Artificial intelligence is undoubtedly becoming an integral part of global business, bringing transformative opportunities but also significant challenges and responsibilities.