EU Bans High-Risk AI Systems as AI Act Comes Into Effect

Written by: Andrés Gánem
Reviewed by: Christine Hoang
10 February 2025
February 2nd marked the first compliance deadline for the EU’s Artificial Intelligence (AI) Act, meaning that bloc regulators can now ban the use of AI systems deemed an “unacceptable risk.” The act, originally proposed in 2021, officially came into effect in August of last year.

According to the official legislation, the purpose of the AI Act is to “improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.”

To this end, the legislation sets out a number of rules for the models, companies, and individuals involved with different artificial intelligence systems.

This includes the bloc’s categorization of four risk levels for AI systems, each with its own level of oversight:
  • Minimal risk – includes simple systems like spam filters, which will face no oversight.
  • Limited risk – includes systems with greater potential for human interaction, such as chatbots, which will face limited oversight.
  • High risk – includes systems that can affect people’s well-being, such as AI used in healthcare, which will face heavy oversight.
  • Unacceptable risk – which will be banned outright.
The AI practices deemed to have an unacceptable level of risk include AI systems used for social scoring, exploiting vulnerabilities like gender or disability, collecting biometric information in public spaces, and predicting crimes based on a person’s appearance.

Systems seeking to manipulate a person’s decisions or practices subliminally also fall under the unacceptable risk category. This is especially noteworthy considering that a recent study by University College London and MIT researchers showed that AI systems exacerbate human bias over time.

Any company found deploying these systems in the EU could face fines of up to €35 million or 7% of its annual revenue, whichever is greater. In September of last year, over 100 companies signed a pledge to start applying the act’s principles before it came into effect.

Recently, OpenAI also updated its terms and privacy policy in the EU to reduce regulatory risk.

The act is set to take effect in full by August 2026.
