What does the EU AI Act entail?

First-ever Regulatory Framework

The EU Artificial Intelligence Act (EU AI Act) is the first comprehensive regulatory framework governing the development, deployment, and use of artificial intelligence (AI) in the European Union. Enacted in 2024, it aims to balance the need for innovation against the protection of users' rights, safety, and transparency through clear, legally binding obligations graded by the risk level of different AI systems.

Works with GDPR

The EU AI Act works in conjunction with other major European data protection laws, such as the General Data Protection Regulation (GDPR), making it an important regulation for any organization that operates AI systems in the EU or serves EU users through AI-enabled services.

The main objectives of the regulation are as follows:

  • Develop harmonized approaches to the development and use of AI.
  • Ensure that AI systems placed on the EU market are safe and ethical.
  • Strengthen governance and enforcement capabilities to hold organizations accountable for their AI systems.
  • Stimulate AI innovation while protecting the fundamental rights of EU citizens.
  • Develop trustworthy AI with a focus on transparency, fairness, and human oversight.

The Act also applies to:

  • AI developers, distributors, and deployers operating in the EU, even if they are not based there.
  • Organizations based outside the EU whose AI systems affect people in the EU.
  • Providers of high-risk AI systems, including biometric surveillance, credit scoring, recruiting tools, and decision aids in healthcare.

The Act covers a wide range of AI applications, from basic algorithms to complex deep learning models, and governs the full lifecycle of an AI system: training, deployment, monitoring, and decommissioning.

Obligations for High-Risk AI Systems

Organizations that provide or deploy high-risk AI systems must meet specific obligations, including:

  • Risk Management System - Conduct continuous risk assessments and mitigation throughout the life cycle of the system.
  • Data Quality - Use training, validation, and testing data that is suitable, representative, and de-biased.
  • Technical Documentation - Maintain up-to-date technical documentation covering the model architecture, the nature of the training data, and the system's accuracy.
  • Audit Log Functionality - Provide automated audit functionality to facilitate full traceability of the system's behavior.
  • Transparency and Explainability - Make the system's operation and limitations easy for end users to understand.
  • Human Oversight - Design systems so that humans can monitor them and override their decisions.
  • Robustness, Accuracy, and Security - Use models that operate reliably, resist adversarial attacks, and cannot be manipulated.
  • Post-Market Monitoring - Monitor the ongoing real-world performance of the AI system and report serious incidents.
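To make the audit-log and traceability obligation concrete, here is a minimal sketch of how a deployer might record each automated decision in an append-only log. The schema, field names, and file path are illustrative assumptions, not requirements prescribed by the Act:

```python
import json
import datetime
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    """One traceable entry per automated decision (hypothetical schema)."""
    timestamp: str
    model_version: str
    input_summary: str   # e.g. a hash or redacted summary, not raw personal data
    output: str
    human_override: bool

def log_decision(record: AuditRecord, path: str = "audit_log.jsonl") -> None:
    # Append-only JSON Lines file, so every decision stays traceable over time.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

record = AuditRecord(
    timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    model_version="credit-scoring-v1.2",  # hypothetical model identifier
    input_summary="sha256:redacted",
    output="declined",
    human_override=False,
)
log_decision(record)
```

An append-only format is chosen here because traceability implies that past records must not be silently rewritten; a production system would also need retention policies and access controls.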

Breaching the EU AI Act can lead to massive fines:

  • Up to €35 million or 7% of global annual turnover (whichever is higher), specifically for use of prohibited AI systems.
  • Lower (but still significant) fines for failing to fulfill high-risk obligations.
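The "whichever is higher" rule for the top penalty tier can be illustrated with a short calculation (the turnover figures below are made up):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Top-tier penalty for prohibited AI practices under the EU AI Act:
    up to EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# EUR 200 million turnover: 7% is EUR 14 million, so the EUR 35 million floor applies.
print(max_fine_eur(200_000_000))    # 35000000
# EUR 1 billion turnover: 7% is EUR 70 million, which exceeds EUR 35 million.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

Note that for large companies the percentage-based cap dominates, which is how the Act scales penalties with company size.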