Chapter 1: General Provisions
Article 1: Subject matter
(1)
The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.
(2)
This Regulation lays down:
(a) harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union;
(b) prohibitions of certain AI practices;
(c) specific requirements for high-risk AI systems and obligations for operators of such systems;
(d) harmonised transparency rules for certain AI systems;
(e) harmonised rules for the placing on the market of general-purpose AI models;
(f) rules on market monitoring, market surveillance, governance and enforcement;
(g) measures to support innovation, with a particular focus on SMEs, including start-ups.
Related Recitals
- Recital 1: Purpose of the regulation
- Recital 2: Compatibility with the values of the European Union
- Recital 3: Establishing a uniform level of protection for AI in the EU
- Recital 8: Legal framework for AI in the EU internal market
- Recital 26: Risk-based approach
- Recital 27: Principles for AI development
- Recital 59: Use in law enforcement
- Recital 155: System for monitoring high-risk models and reporting serious incidents