
AI Act FAQ

The AI Act, which takes effect on 1 August 2024, marks a significant milestone in the regulation of artificial intelligence within the European Union. This legislation aims to ensure that AI technologies are developed and deployed in a manner that prioritizes safety, transparency, and accountability. By establishing comprehensive guidelines and standards, the AI Act seeks to mitigate potential risks associated with AI, such as discrimination and privacy breaches, while fostering innovation and public trust. Its implementation represents a proactive step towards harnessing the benefits of AI while safeguarding fundamental rights and societal values.

Anyone using or providing AI in the EU must comply with the AI Act, but for many it is still unclear what the AI Act is all about. Below you can find frequently asked questions about the AI Act, each with a short and clear answer.


Quick facts

  • The AI Act was published in the Official Journal on 12 July 2024
  • The AI Act officially takes effect on 1 August 2024
  • The risk level depends on the use case, not the model or the sector of the company

Why is there an AI Act?

The AI Act was written for numerous reasons:

  • To regulate the highest risks of AI (e.g. discrimination, privacy issues, emotional manipulation, disinformation, …);
  • To enforce that all AI systems are secure and comply with existing laws;
  • To put humans back at the centre;
  • To provide a framework for companies to accelerate their innovation;
  • To prevent market fragmentation.

The AI Act also forces us to think about what we can automate with AI and what we should not, because AI is not always useful or desirable.

There are a number of risks associated with the use of AI, but not all of them are direct.

Some examples of direct risks of AI:

  • Discrimination & bias
  • Privacy issues
  • Physical damage
  • Emotional manipulation
  • Disinformation and propaganda

Some examples of indirect risks of AI:

  • Influence on the job market & the economy
  • Deteriorated writing & critical thinking skills
  • Algorithmization of society

Who should take the AI Act into account?

Anyone who develops, uses, distributes, imports, represents, or markets AI tools under their own trademark in or into the EU must comply with the AI Act.

Roles defined in the AI Act:

  • Provider - develops an AI tool
  • Deployer - uses an AI tool
  • Distributor - distributes an AI tool
  • Importer - imports an AI tool from outside the EU to the EU
  • All third parties:
    • Authorised representative - represents an AI tool or a provider of an AI tool
    • Product manufacturer - places an AI tool on the EU market under their own name or trademark

When will the AI Act come into effect?

The AI Act was published in the Official Journal on 12 July 2024 and officially comes into effect 20 days later, on 1 August 2024. Depending on the risk level of your use case, you will have a different deadline to become compliant; the resulting calendar dates are sketched after the list below.

  • Unacceptable-risk systems must be phased out within 6 months;
  • The rules for general-purpose AI systems, and the provisions with penalties attached, apply after 12 months;
  • High-risk systems must be compliant within 24 or 36 months (depending on the use case).
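
To make these grace periods concrete, the bullets above can be turned into calendar dates counted from the entry-into-force date. Below is a minimal Python sketch; `add_months` is a small helper written only for this illustration, and the labels simply paraphrase the list above.

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the AI Act officially takes effect

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date (safe here because the day is the 1st)."""
    month_index = d.year * 12 + (d.month - 1) + months
    return date(month_index // 12, month_index % 12 + 1, d.day)

# Grace periods from the list above, expressed as calendar deadlines.
deadlines = {
    "unacceptable-risk systems phased out (+6 months)": add_months(ENTRY_INTO_FORCE, 6),
    "general-purpose AI rules and penalty provisions apply (+12 months)": add_months(ENTRY_INTO_FORCE, 12),
    "high-risk systems compliant (+24 months)": add_months(ENTRY_INTO_FORCE, 24),
    "high-risk systems compliant, longest grace period (+36 months)": add_months(ENTRY_INTO_FORCE, 36),
}

for label, deadline in deadlines.items():
    print(f"{label}: {deadline.isoformat()}")
```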

Why should I be compliant with the AI Act?

There are numerous reasons to be compliant with the AI Act:

  • The need for trust and transparency is growing;
  • AI compliance provides a framework that can accelerate innovation;
  • Non-compliance can lead to large penalties.

What are the penalties for not being compliant to the AI Act?

  • Non-compliance with prohibitions
    Up to EUR 35 million or 7% of global ATW
  • Non-compliance with other requirements
    Up to EUR 15 million or 3% of global ATW
  • Providing incorrect or incomplete information
    Up to EUR 7.5 million or 1% of global ATW

*ATW = Annual Turnover Worldwide

WARNING:

  • For SMEs, the lower of the two amounts applies; for other companies, it is the higher (see the sketch below).
  • The actual penalty depends on the circumstances of the incident.
  • The penalties are lower for governments.
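
As a rough illustration of how these ceilings combine, here is a minimal Python sketch. It assumes, consistent with the warning above, that the ceiling for most companies is the higher of the fixed amount and the turnover percentage, while for SMEs it is the lower of the two; the actual fine always depends on the circumstances.

```python
def fine_ceiling(annual_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_pct: float,
                 is_sme: bool = False) -> float:
    """Illustrative upper bound on an AI Act fine.

    Assumption: non-SMEs face the higher of the fixed cap and the turnover
    share; SMEs face the lower of the two (see the warning above).
    """
    turnover_share = turnover_pct * annual_turnover_eur
    return min(fixed_cap_eur, turnover_share) if is_sme else max(fixed_cap_eur, turnover_share)

# Prohibited practices: up to EUR 35 million or 7% of annual turnover worldwide.
print(fine_ceiling(2_000_000_000, 35_000_000, 0.07))             # 140000000.0
print(fine_ceiling(10_000_000, 35_000_000, 0.07, is_sme=True))   # 700000.0
```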

How do I know whether the AI Act is applicable to my use case and to what risk level my use case belongs?

The Future of Life Institute (FLI) developed the EU AI Act Compliance Checker, which asks you a few questions and then estimates whether your use case falls under the AI Act and into which risk category it belongs.

The risk levels defined in the AI Act are the following:

  • AI systems:
    • Unacceptable risk
    • High risk
    • Limited risk
    • Minimal risk
    • Excluded
  • General purpose AI systems:
    • Systemic risk
    • No systemic risk

TIP: Use the links to the AI Act that are sometimes included in the questions to get more information about the terminology.

WARNING:

  • The risk relates to the use case, not to the model or the sector of the company!
  • Just because the AI Act doesn't prohibit it doesn't mean it's a good idea!

What AI systems are unacceptable?

All AI systems that are prohibited are listed in Art. 5 of the AI Act.

Some examples include:

  • AI systems that are meant to manipulate or deceive people
  • AI systems that exploit vulnerabilities of people
  • AI systems meant for social scoring
  • AI systems for untargeted scraping of facial images from the internet or CCTV footage
  • Biometric categorisation systems

What AI systems have a high risk level?

All AI systems with a high risk are listed in Art. 6, Annex I and Annex III of the AI Act.

Some examples of those systems include:

  • AI systems integrated into products subject to EU harmonization regulations: machines, boats, radio systems, airplanes, elevators, trailers, agricultural implements, toys, …
  • Biometric systems: emotion recognition, real-time face recognition to grant or deny access to certain buildings, …
  • Management & operation of critical infrastructure: sensor data model to plan maintenance of bridges, …
  • Education & vocational training: models that influence the admission to a school, …
  • Employment & employee management: CV screening, models that decide when to fire employees, …
  • Access to essential private or public services: credit control models that decide whether someone qualifies for a mortgage
  • Police: predictive policing, …
  • Migration, asylum & border control: models that estimate the risk of irregular immigration in an airport, …
  • Justice & democratic processes: models used by a judicial authority to investigate facts and take decisions, …

What AI systems have a limited or minimal risk?

The exact criteria for limited and minimal risk are not listed in the AI Act itself, but some examples are listed in the high-level summary of the AI Act on the official website.

Some examples of AI systems with limited risk:

  • Chatbots
  • Deepfakes

Some examples of AI systems with minimal risk (at least in 2021; this is changing with generative AI):

  • AI enabled video games
  • Spam filters

What General-Purpose AI models have systemic risk?

The criteria for having systemic risks are listed in Art. 51 and Annex XIII of the AI Act.

In summary, those criteria are:

  • The number of parameters of the model (no exact numbers);
  • The quality or size of the dataset (no exact numbers);
  • The amount of computation used for training the model (cumulative training compute greater than 10^25 floating-point operations; see the sketch after this list);
  • The input and output modalities of the model (will be compared with the SOTA);
  • The benchmarks and evaluations of capabilities of the model (based on the number of tasks it can do without additional training, adaptability to learn new, distinct tasks, its level of autonomy and scalability, the tools it has access to);
  • Whether it has a high impact on the internal market due to its reach (at least 10 000 registered business users established in the EU);
  • The number of registered end-users (no exact numbers).
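
Only the training-compute criterion comes with an explicit number, so a check for that single criterion can be sketched as follows. This is a minimal illustration in Python, assuming that crossing 10^25 training FLOPs triggers the presumption of systemic risk; all the other criteria in the list above require a qualitative assessment.

```python
# Threshold from the list above: cumulative training compute in floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model crosses the training-compute threshold (single-criterion sketch)."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a model trained with roughly 5 x 10^25 FLOPs crosses the threshold.
print(presumed_systemic_risk(5e25))   # True
print(presumed_systemic_risk(3e24))   # False
```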

What do I have to do for each risk level?

Summary of the obligations per risk level (a small checklist sketch follows the list):

  • Unacceptable risk (Art. 5)
    • Remove the AI system from the market
  • High risk (Art. 6)
    • As a provider:
      • Risk analysis & management (Art. 9)
      • Responsible data governance (Art. 10)
      • Technical documentation (Art. 11)
      • Logging (Art. 12)
      • Transparent & clear information (Art. 13)
      • Human oversight (Art. 14)
      • Accurate, robust & secure model (Art. 15)
    • As a deployer:
      • Use the system as instructed
      • Provide human oversight
      • Provide relevant input data
      • Report issues to the provider
      • Inform users when needed
  • Limited risk
    • Transparent & clear information (Art. 50)
  • Minimal risk
    • AI literacy
    • Code of conduct
  • Systemic risk (Art. 51)
    • Transparent & clear information (Art. 50)
    • Risk analysis & management
  • No systemic risk
    • Transparent & clear information (Art. 50)
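
If you want to track these obligations programmatically, for example as an internal compliance checklist, the summary above can be captured in a simple mapping. This is a hypothetical structure for illustration only; the keys and strings paraphrase the list above and are not terms defined by the AI Act.

```python
# Hypothetical checklist structure paraphrasing the summary above.
OBLIGATIONS: dict[str, list[str]] = {
    "unacceptable risk": ["Remove the AI system from the market (Art. 5)"],
    "high risk (provider)": [
        "Risk analysis & management (Art. 9)",
        "Responsible data governance (Art. 10)",
        "Technical documentation (Art. 11)",
        "Logging (Art. 12)",
        "Transparent & clear information (Art. 13)",
        "Human oversight (Art. 14)",
        "Accurate, robust & secure model (Art. 15)",
    ],
    "high risk (deployer)": [
        "Use the system as instructed",
        "Provide human oversight",
        "Provide relevant input data",
        "Report issues to the provider",
        "Inform users when needed",
    ],
    "limited risk": ["Transparent & clear information (Art. 50)"],
    "minimal risk": ["AI literacy", "Code of conduct"],
    "systemic risk": ["Transparent & clear information (Art. 50)", "Risk analysis & management"],
    "no systemic risk": ["Transparent & clear information (Art. 50)"],
}

print(OBLIGATIONS["limited risk"])  # ['Transparent & clear information (Art. 50)']
```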

Will the AI Act limit innovation?

With the AI Act in place, one can still do almost anything; only the use cases that conflict with fundamental rights are prohibited.

The AI Act may initially slow down innovation as it takes time to understand the requirements. However, once these are clear, the established framework can be reused for future projects, streamlining the process.

What are other best practices to consider?

The AI Act does not solve everything. You still need to comply with other regulations and implement other best practices:

  • GDPR - enforce data privacy
  • AI governance - ensure that the AI model is trustworthy, responsible, ethical, honest, explainable, transparent, and accountable
  • Change management - organize awareness sessions, training sessions tailored to the individual and organization, …

Authors

  • Kyra Van Den Eynde, AI/CS Researcher, AI Lead

Want to know more about our team?

Visit the team page