8 Essential Steps for EU AI Act Compliance

The EU AI Act, which entered into force on August 1, 2024, is the world’s first comprehensive legal framework on AI. It establishes a new standard for AI governance not only in the EU but around the world, including the US.

“The aim is to turn the EU into a global hub for trustworthy AI,” according to EU officials.

Global Impact Beyond the EU

As the first comprehensive AI regulatory framework, the EU AI Act may set AI standards for other jurisdictions, much as the GDPR has done for information privacy, producing a so-called ‘Brussels Effect’.

Initiatives such as the US ‘AI Bill of Rights’ and National Artificial Intelligence Initiative, China’s New Generation Artificial Intelligence Development Plan, and the UK’s AI governance framework reflect diverse approaches to addressing ethical risks and promoting economic growth through AI.

These global initiatives demonstrate the increasing importance of responsible AI governance and the interconnected nature of worldwide regulatory developments. IDC predicts that by 2028, 60% of governments worldwide will adopt a risk management approach in framing their AI and generative AI policies (IDC FutureScape: Worldwide National Government 2024 Predictions).

When Does the Act Go into Effect?

Most provisions will apply starting 2 August 2026.

However, the rules relating to AI literacy and the prohibitions on certain AI systems will apply from 2 February 2025, while the requirements for General-Purpose AI (GPAI) models will apply from 2 August 2025.

Classification rules for AI systems

The Act adopts a risk-based approach, categorizing AI systems into four tiers:

  • Unacceptable risk: These systems are prohibited and include those that deploy manipulative techniques to distort human behavior, causing significant harm.
  • High risk: This tier includes AI systems used in critical infrastructure, education and vocational training, employment, law enforcement, and migration, asylum, and border control management.
  • Limited risk: These include deepfakes and chatbots. Developers and deployers of these AI systems are required to make end-users aware that they are interacting with AI.
  • Minimal risk: These AI systems are unregulated and include spam filters and AI-enabled video games (however, this is changing with generative AI).

Key Action Items for Compliance

Here are 8 steps you can take to jump-start your compliance efforts:

1. Identify the team responsible for AI governance and compliance

Foster cross-functional collaboration among legal, product, engineering, and operational teams to ensure AI regulatory requirements are effectively addressed throughout the development process.

2. Inventory the organization’s AI

In assessing the Act’s applicability and corresponding obligations, organizations should inventory the AI they are developing, using, distributing, or selling, and document:

  • Whether the AI is internally developed or provided by a third party.
  • Where applicable, the party providing the AI or developing the AI on behalf of the organization (along with any applicable contractual terms).
  • What the AI is designed to do, and in which settings and jurisdictions the AI will be used.
  • What type of content the AI uses as input to run, and what the AI produces as output when it finishes running.

Organizations should consider adopting a department-by-department approach to the inventory process, as this will often help identify less visible AI use cases.
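
To make this concrete, here is one way such an inventory record could be structured in code. This is a minimal sketch, not a prescribed schema; the class and field names are hypothetical.

```python
# Minimal sketch of an AI inventory record; all class and field names
# are hypothetical illustrations, not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    name: str                               # internal name of the AI system
    internally_developed: bool              # False if provided by a third party
    provider: str | None = None             # third party providing or developing the AI
    contract_reference: str | None = None   # applicable contractual terms, if any
    intended_purpose: str = ""              # what the AI is designed to do
    settings_and_jurisdictions: list[str] = field(default_factory=list)
    input_types: list[str] = field(default_factory=list)   # content consumed as input
    output_types: list[str] = field(default_factory=list)  # content produced as output

# Example entry captured during a department-by-department review:
entry = AIInventoryEntry(
    name="resume-screening-assistant",
    internally_developed=False,
    provider="ExampleVendor Inc.",
    contract_reference="MSA-2024-17",
    intended_purpose="Rank job applications for recruiter review",
    settings_and_jurisdictions=["HR", "EU (Germany, France)"],
    input_types=["CV text", "job description"],
    output_types=["ranked candidate list", "suitability score"],
)
```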

3. Classify AI systems and assess AI Act applicability

After the organization’s AI inventory is completed, use it to determine whether the organization’s AI qualifies as an in-scope technology and, if it does:

  • Whether the company is acting as a provider, importer, distributor, or deployer of the AI; and
  • Whether the AI qualifies as Prohibited AI, High-Risk AI, General-Purpose AI, or otherwise involves direct interaction with individuals or exposes individuals to AI-generated content.

For Prohibited AI:

  • Either discontinue its use in the EU or alter its use to qualify for an exception or fall outside of the Prohibited AI designation.

For High-Risk AI:

  • Determine when the system should be placed on the market or put into service in the EU (as the AI Act’s High-Risk AI System obligations trigger on different dates depending on these events).

For General-Purpose AI Models:

  • Prepare technical documentation that can be leveraged for the Act’s documentation requirements.
  • Understand how choices made today may affect the information included in such documentation (including the data used to train the model and risk assessments performed in connection with the model).
  • Determine when the model should be placed on the market in the EU.

For AI systems interacting directly with individuals or exposing individuals to AI-generated content:

  • Prepare product design elements to address applicable transparency obligations.
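
As a rough illustration of how these outcomes might be tracked, the sketch below maps a classification result to the follow-up items above. The tier names mirror the Act’s categories, but the enum, function, and action lists are simplified assumptions, not legal guidance.

```python
# Simplified sketch of recording an AI Act classification outcome.
# The classification decision itself requires legal analysis; this
# code only structures the result and the follow-up actions above.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    GPAI = "general-purpose AI model"
    TRANSPARENCY = "interacts with individuals / AI-generated content"
    MINIMAL = "minimal risk"

def next_actions(tier: RiskTier) -> list[str]:
    """Map a classification outcome to the follow-up items in this step."""
    if tier is RiskTier.PROHIBITED:
        return ["Discontinue EU use, or alter the system to qualify for an "
                "exception or fall outside the prohibition."]
    if tier is RiskTier.HIGH_RISK:
        return ["Determine when the system is placed on the market or put "
                "into service in the EU."]
    if tier is RiskTier.GPAI:
        return ["Prepare technical documentation.",
                "Track training data and risk assessments for that documentation.",
                "Determine when the model is placed on the EU market."]
    if tier is RiskTier.TRANSPARENCY:
        return ["Prepare product design elements for transparency obligations."]
    return ["Monitor for changes in use that could alter the classification."]

print(next_actions(RiskTier.HIGH_RISK))
```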

4. Develop an AI governance framework/policies

To successfully navigate the evolving AI legal environment, organizations should develop and implement internal policies and procedures to align their use and development of AI with their mission, risk profile, legal obligations, and ethical priorities.

Most organizations start by developing a Responsible AI Use Policy to guide their initial AI-related compliance efforts.

5. Establish a risk management system

Develop a comprehensive risk management system tailored to your AI products. This should include regular risk assessments, mitigation strategies, and continuous monitoring to identify and address potential risks promptly.
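
A lightweight risk register is one way to operationalize this. The sketch below is a minimal illustration; the scoring scale, review interval, and escalation threshold are assumptions to be tuned to your own risk profile.

```python
# Minimal sketch of a risk register entry with periodic re-assessment.
# The 1-5 scales, 90-day interval, and escalation threshold are
# illustrative assumptions, not values taken from the AI Act.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Risk:
    description: str
    likelihood: int     # 1 (rare) to 5 (almost certain)
    severity: int       # 1 (negligible) to 5 (critical)
    mitigation: str
    last_assessed: date

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

    def needs_review(self, today: date, interval_days: int = 90) -> bool:
        """Flag risks whose periodic re-assessment is overdue."""
        return today - self.last_assessed > timedelta(days=interval_days)

risk = Risk(
    description="Training data drift degrades accuracy for EU users",
    likelihood=3,
    severity=4,
    mitigation="Monthly drift monitoring and scheduled retraining",
    last_assessed=date(2024, 11, 1),
)
if risk.score >= 12 or risk.needs_review(date.today()):
    print(f"Escalate: {risk.description} (score {risk.score})")
```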

6. Prioritize ethical AI design

Prioritize ethical considerations in AI design. Implement frameworks to minimize biases, ensure fairness, and promote inclusivity in AI-driven decisions and outputs.
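
As one illustration, the sketch below computes a demographic parity gap, a common starting point for bias checks. A real fairness review would combine several metrics with domain context; the function and data here are hypothetical.

```python
# Sketch of one basic bias check: compare positive-outcome rates
# across groups (demographic parity). Data and tolerance are made up.
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, int]]) -> dict[str, float]:
    """Positive-outcome rate per group from (group, outcome) pairs."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, outcome in outcomes:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
parity_gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {parity_gap:.2f}")  # flag if the gap exceeds a set tolerance
```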

7. Enhance transparency and employee training

Ensure clear communication about AI system usage to all stakeholders.

Implement training and competency requirements for human oversight to ensure personnel can effectively monitor AI systems. Maintain transparency in AI operations by providing clear instructions and information about AI capabilities and limitations.

Clear Communication & Documentation:

  • Inform users they are interacting with an AI system.
  • Provide detailed instructions for use and information about the system’s capabilities.

Training Requirements:

  • Regular training sessions for staff on regulatory requirements, AI ethics, and compliance.

Operational Transparency:

  • Regular updates and reports on AI system performance.
  • Open channels for customers’ feedback on AI output and concerns.
  • Automated testing and validation processes to regularly check for compliance issues, as sketched below. This includes validating data quality, model accuracy, and alignment with ethical standards.
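
The sketch below shows what such automated checks might look like in practice; the field names, thresholds, and check functions are assumptions, not requirements drawn from the Act.

```python
# Hypothetical automated compliance checks that could run in CI.
# Required fields and the 90% accuracy floor are illustrative.
def check_data_quality(records: list[dict]) -> None:
    """Verify every record carries the fields the model expects."""
    required = {"input", "label"}
    for i, record in enumerate(records):
        missing = required - record.keys()
        assert not missing, f"record {i} missing fields: {missing}"

def check_accuracy(correct: int, total: int, minimum: float = 0.90) -> None:
    """Fail the pipeline if accuracy drops below the agreed floor."""
    accuracy = correct / total
    assert accuracy >= minimum, f"accuracy {accuracy:.2%} below {minimum:.0%}"

# Example run against a validation batch:
check_data_quality([{"input": "sample text", "label": 1}])
check_accuracy(correct=95, total=100)
print("validation checks passed")
```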

8. Implement robust security measures

Implement advanced security measures to protect your AI systems from cyber threats. This includes securing data, protecting AI models from tampering, and ensuring system integrity. Incorporate the security-by-design principle from the initial stages of product development.
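
As one concrete example of tamper protection, the sketch below verifies a model artifact’s SHA-256 digest against a recorded value before loading it. The file path and expected digest are placeholders.

```python
# Sketch of a tamper check: refuse to load a model artifact whose
# SHA-256 digest does not match the value recorded at release time.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

EXPECTED = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"  # placeholder digest
artifact = Path("models/classifier.bin")  # hypothetical path

if artifact.exists() and sha256_of(artifact) != EXPECTED:
    raise RuntimeError("Model artifact failed integrity check; refusing to load.")
```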

Conclusion

By taking the steps outlined above, organizations can not only ensure compliance with the AI Act but also build a foundation for the responsible and ethical use of AI. Such a proactive approach will help mitigate risk, increase stakeholder trust, and position the organization for a sustainable, compliant, and successful business in the evolving AI landscape.

Cytrio helps organizations comply with the EU AI Act by providing a Responsible AI Use Policy that you can incorporate into your compliance program. Sign up for Cytrio and access the out-of-the-box Responsible AI Use Policy.