The EU AI Regulations

As the integration of AI continues to surge, regulators worldwide are focusing on frameworks that balance innovation with ethical responsibility.

At the heart of this evolution lies the imperative for transparency in AI decision-making. Ensuring fairness and accountability is a must, not a nice-to-have, for any AI system. This article navigates the complexities of AI regulation, from the EU AI Act to the U.S. AI Bill of Rights.

EU AI Act

Once adopted, the EU AI Act will be the first law worldwide to regulate the development and use of AI in a comprehensive way. The AI Act is set to be the “GDPR for AI”: hefty penalties for non-compliance, extra-territorial scope, and a broad set of mandatory requirements for organisations that develop and deploy AI.

According to the EU AI Act:

AI systems are computer programs created using techniques such as machine learning, logic, and statistics. They are designed to achieve specific human-defined goals and can produce outputs such as content, predictions, recommendations, or decisions that impact the environments they operate in.

A risk-based approach

The AI Act assigns applications of AI to four risk categories based on their potential for harm.

Unacceptable risk

Systems considered a threat to people.

  • Social scoring
  • Manipulation and exploitation of vulnerable groups
  • Spread of fake information
All unacceptable-risk systems will be banned, with limited exceptions subject to court approval.

High risk

Systems that pose a risk to people’s health, safety, or fundamental rights.

  • Law enforcement, biometric identification, and categorisation of persons
  • Education, employment, and access to essential private and public services
  • Management and operation of critical infrastructure
All high-risk AI systems will be assessed before being placed on the market and throughout their lifecycle.

Limited risk

Systems that interact with or support humans in certain decision-making and can pose a risk of manipulation.

  • Systems that interact with humans (e.g. chatbots)
  • Systems used to detect emotions or determine association with categories
  • Systems used to generate or manipulate content (e.g. deepfakes)
Limited-risk AI systems must comply with minimal transparency requirements that allow users to make informed decisions.

Low risk

Systems that pose little or no risk to individuals:

  • Recommendation systems that suggest products or content based on preferences
  • Fraud detection and spam filters
  • AI-enabled video games
Low-risk AI systems have no obligations under the Act.
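
To make the four tiers concrete, here is a minimal Python sketch of the taxonomy. The `RiskTier` enum and the use-case mapping are illustrative assumptions drawn from the examples above, not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # assessed before market entry and throughout lifecycle
    LIMITED = "limited"            # minimal transparency requirements
    LOW = "low"                    # no obligations under the Act

# Hypothetical mapping of example use cases (taken from the lists
# above) to tiers; real classification requires legal analysis.
EXAMPLE_USE_CASES = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "biometric_identification": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "deepfake_generation": RiskTier.LIMITED,
    "spam_filter": RiskTier.LOW,
}

HEADLINE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited (limited, court-approved exceptions).",
    RiskTier.HIGH: "Conformity assessment before market entry, plus lifecycle monitoring.",
    RiskTier.LIMITED: "Transparency: users must know they are dealing with an AI system.",
    RiskTier.LOW: "No obligations under the Act.",
}

def headline_obligation(use_case: str) -> str:
    """Return the headline obligation for one of the example use cases."""
    return HEADLINE_OBLIGATIONS[EXAMPLE_USE_CASES[use_case]]

print(headline_obligation("deepfake_generation"))
# Transparency: users must know they are dealing with an AI system.
```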

How to comply?

Mandatory requirements for high-risk AI systems

Before a high-risk system can be placed on the market, the Act requires providers to put in place:

  • A risk management system maintained across the system’s lifecycle
  • Data governance ensuring training, validation, and testing data are relevant and representative
  • Technical documentation and record-keeping (automatic logging of events)
  • Transparency and provision of information to users
  • Human oversight measures
  • Appropriate levels of accuracy, robustness, and cybersecurity
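
As a hedged illustration of how the record-keeping and human-oversight requirements might translate into engineering practice, the following Python sketch appends decision records to an append-only audit log. All names here (`DecisionRecord`, the model version, the log path) are hypothetical, not prescribed by the Act.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    """One logged decision, in support of the record-keeping requirement."""
    timestamp: float
    model_version: str
    input_summary: str    # summarise rather than store raw personal data
    output: str
    human_reviewed: bool  # evidence for the human-oversight requirement

def log_decision(record: DecisionRecord, path: str = "audit_log.jsonl") -> None:
    """Append one record to an append-only JSON Lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical high-risk use: a credit-scoring model's decision is
# logged together with the fact that a human reviewed it.
log_decision(DecisionRecord(
    timestamp=time.time(),
    model_version="credit-scorer-1.4.2",
    input_summary="loan application #8812 (features hashed)",
    output="declined",
    human_reviewed=True,
))
```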

Penalties

The Act proposes significant fines for violations: up to 6% of a company’s global annual turnover or €30 million, whichever is higher. It also allows for legal action against companies that fail to comply.
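
The “whichever is higher” rule is a simple maximum. A minimal sketch, assuming the proposed figures of €30 million and 6% of global annual turnover:

```python
def maximum_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of a fine under the proposed Act: the higher of
    EUR 30 million or 6% of global annual turnover."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# A company with EUR 2 billion turnover: 6% is EUR 120 million,
# which exceeds the EUR 30 million floor.
print(f"EUR {maximum_fine_eur(2_000_000_000):,.0f}")  # EUR 120,000,000
```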