Recitals of the EU AI Act
- Recital 1: Purpose of the regulation
- Recital 2: Compatibility with the values of the European Union
- Recital 3: Establishing a uniform level of protection for AI in the EU
- Recital 4: Definition of artificial intelligence
- Recital 5: Possible risks associated with the use of AI
- Recital 6: People-centered development in line with EU values
- Recital 7: Level of protection
- Recital 8: Legal framework for AI in the EU internal market
- Recital 9: Harmonized rules for high-risk AI systems in the EU legal framework
- Recital 10: Validity of existing legal bases
- Recital 11: Liability of intermediaries under Directive 2000/31/EC
- Recital 12: Definition and characteristics of AI systems
- Recital 13: Definition of "deployer"
- Recital 14: Biometric data
- Recital 15: Biometric identification
- Recital 16: Biometric categorization
- Recital 17: Remote biometric identification system
- Recital 18: Emotion recognition system
- Recital 19: Publicly accessible space
- Recital 20: Concepts needed for awareness-raising and AI literacy in the EU
- Recital 21: Level playing field in applying the regulation to AI providers
- Recital 22: Applicability for actors in third countries
- Recital 23: Applicability to EU institutions, authorities and other bodies
- Recital 24: Exceptions for military, defense and national security
- Recital 25: Exceptions for research and development
- Recital 26: Risk-based approach
- Recital 27: Principles for AI development
- Recital 28: Harmful or improper use
- Recital 29: Influencing human behavior
- Recital 30: Use of AI for biometric categorization
- Recital 31: Use of AI for social assessment
- Recital 32: Use of AI for real-time remote biometric identification
- Recital 33: Use of biometric real-time remote identification systems for law enforcement purposes
- Recital 34: Proportionate use and fundamental rights impact assessment
- Recital 35: Authorization requirement
- Recital 36: Notification of the responsible authorities
- Recital 37: Adoption into national law
- Recital 38: Processing of biometric data
- Recital 39: Applicability of Directive (EU) 2016/680
- Recital 40: Binding effect for Ireland
- Recital 41: Binding effect for Denmark
- Recital 42: Presumption of innocence
- Recital 43: Ban on mass surveillance using AI facial recognition
- Recital 44: Risks and limits of emotion recognition
- Recital 45: Further Union law remains unaffected
- Recital 46: Safety requirements for high-risk systems
- Recital 47: Risk minimization
- Recital 48: Protection of fundamental rights
- Recital 49: Adaptation of legislation
- Recital 50: Classification as a high-risk system
- Recital 51: Effects of classification as a high-risk system
- Recital 52: Criteria for classification as a high-risk system
- Recital 53: Exceptions and risks for the risk classification
- Recital 54: Use cases and risk assessment
- Recital 55: Management and operation of critical infrastructure
- Recital 56: Use in the area of education
- Recital 57: Use in the area of employment, personnel management and self-employment
- Recital 58: Use for social benefits, creditworthiness and emergencies
- Recital 59: Use in law enforcement
- Recital 60: Use in the area of migration, asylum and border control
- Recital 61: Use in the administration of justice
- Recital 62: Use in elections
- Recital 63: Legality under other EU legal acts
- Recital 64: Compatibility with other Union law
- Recital 65: Use of risk management system
- Recital 66: Scope of the requirements
- Recital 67: Data governance and data management
- Recital 68: Access to high-quality data
- Recital 69: Right to data protection and privacy
- Recital 70: Right to protection from discrimination
- Recital 71: Development documentation
- Recital 72: Transparency
- Recital 73: Monitoring by natural persons
- Recital 74: Level of accuracy and performance metrics
- Recital 75: Technical robustness
- Recital 76: Cybersecurity
- Recital 77: Safety requirements according to the regulation
- Recital 78: Conformity assessment procedure
- Recital 79: Responsibility
- Recital 80: Equality and special protection
- Recital 81: Quality management system
- Recital 82: Level playing field
- Recital 83: Roles and duties of the players
- Recital 84: Conditions under which other actors are considered providers
- Recital 85: Interaction and cooperation between providers of general-purpose high-risk AI systems
- Recital 86: Cooperation of former providers
- Recital 87: AI system as a safety component
- Recital 88: Provision of information
- Recital 89: Release from responsibilities
- Recital 90: Model contract terms
- Recital 91: Responsibilities and obligations for operators
- Recital 92: Duty to inform and consult employees and representatives
- Recital 93: Informing affected parties
- Recital 94: Application of Directive (EU) 2016/680 for the processing of biometric data for law enforcement purposes
- Recital 95: Protection regulations for subsequent remote biometric identification
- Recital 96: Fundamental rights impact assessment
- Recital 97: Definition of general-purpose AI models
- Recital 98: Determining the generality of a model
- Recital 99: Generative AI models as general-purpose models
- Recital 100: Examples of general-purpose AI models
- Recital 101: Transparency obligations for providers
- Recital 102: Transparency obligations for models under open-source licenses
- Recital 103: Definition of a general-purpose AI model under an open-source license
- Recital 104: Exemptions from the transparency obligation
- Recital 105: Use of protected content during development
- Recital 106: Ensuring that providers fulfill their obligations
- Recital 107: Transparency obligations for training data
- Recital 108: Compliance control
- Recital 109: Proportionality of surveillance measures
- Recital 110: Systemic risks for general-purpose AI models
- Recital 111: Classification of AI models with systemic risk
- Recital 112: Clarification of the procedure for classification
- Recital 113: Power of the Commission to classify
- Recital 114: Risk assessment obligation for providers
- Recital 115: Risk mitigation obligation
- Recital 116: Codes of practice for general-purpose AI models with systemic risks
- Recital 117: Uniform codes of practice by the EU Commission
- Recital 118: Regulation of AI systems in the Union by regulation
- Recital 119: AI models as intermediary services
- Recital 120: Importance of obligations for providers and operators
- Recital 121: Harmonized standards
- Recital 122: Compliance with cyber security requirements
- Recital 123: Conformity assessment of high-risk systems
- Recital 124: Assessment of compliance with the requirements of the regulation
- Recital 125: Limited conformity assessment by third parties
- Recital 126: Procedure for conformity assessment by third parties
- Recital 127: Recognition of conformity assessment results
- Recital 128: Renewed conformity assessment in the event of significant changes
- Recital 129: CE marking for high-risk systems
- Recital 130: Specific grounds for placing on the market without conformity assessment
- Recital 131: Obligation to register high-risk AI systems
- Recital 132: Transparency obligations where there is a particular risk of fraud and identity theft
- Recital 133: Mandatory labeling of AI-generated results
- Recital 134: Obligation to disclose AI-generated content
- Recital 135: Codes of conduct for AI content review
- Recital 136: Importance of transparency for the implementation of the regulation
- Recital 137: Transparency and legality
- Recital 138: Introduction of AI regulatory sandboxes
- Recital 139: Objectives of AI regulatory sandboxes
- Recital 140: Legal basis for the use of personal data in the public interest
- Recital 141: Testing AI systems under real conditions
- Recital 142: Encouraging support and promotion by member states
- Recital 143: Consideration of SMEs, including start-ups
- Recital 144: Promoting and protecting innovation
- Recital 145: Minimizing risk and facilitating compliance with obligations
- Recital 146: Relief for micro-enterprises
- Recital 147: Facilitating access for test and experimental facilities
- Recital 148: Introduction of a governance framework
- Recital 149: Establishment of a committee
- Recital 150: Advisory forum
- Recital 151: Scientific committee
- Recital 152: Establishment of Union structures to support AI systems
- Recital 153: Designation of at least one notifying authority and one market surveillance authority per member state
- Recital 154: Powers of competent national authorities
- Recital 155: System for post-market monitoring of high-risk AI systems and reporting serious incidents
- Recital 156: Applicability of the market surveillance and product conformity framework
- Recital 157: Position of national authorities or other bodies
- Recital 158: Supervision and market surveillance of AI systems in the financial sector
- Recital 159: Powers of competent authorities for biometric data
- Recital 160: Joint activities of market surveillance authorities
- Recital 161: Responsibilities and competencies
- Recital 162: Synergies at Union level
- Recital 163: Scientific panel in support of the AI Office
- Recital 164: Competencies of the AI Office
- Recital 165: Voluntary codes of conduct for non-high-risk AI systems
- Recital 166: Safety of non-high-risk AI systems
- Recital 167: Cooperation between competent authorities
- Recital 168: Implementation of provisions
- Recital 169: Fines
- Recital 170: Legal remedies
- Recital 171: Explanation of decisions made using high-risk systems
- Recital 172: Whistleblower protection
- Recital 173: Delegated powers of the Commission
- Recital 174: Assessment and review
- Recital 175: Conferral of implementing powers
- Recital 176: Principle of proportionality and subsidiarity
- Recital 177: Ensuring continuity
- Recital 178: Voluntary compliance during the transition phase
- Recital 179: Date of validity
- Recital 180: Decree