![EU Bans High-Risk AI Systems as AI Act Comes Into Effect](https://dt2sdf0db8zob.cloudfront.net/wp-content/uploads/2025/02/European-Parliament-headquarters-in-Strasbourg-1.webp)
EU Bans High-Risk AI Systems as AI Act Comes Into Effect
February 2nd marked the first compliance deadline for the EU’s Artificial Intelligence (AI) Act, meaning that the bloc’s regulators can now ban the use of AI systems deemed an “unacceptable risk.” The act, originally proposed in 2021, officially came into effect in August of last year.
According to the official legislation, the purpose of the AI Act is to “improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.”
To this end, the legislation contains a number of rules governing the models, companies, and individuals involved with different artificial intelligence systems.
This includes the bloc’s categorization of AI systems into four risk levels, each with its own degree of oversight:
- Minimal risk – includes simple systems like spam filters, which will face no oversight.
- Limited risk – includes systems with greater potential for human interaction, like chatbots, which will face limited oversight.
- High risk – includes systems that can affect people’s well-being, such as AI systems used in healthcare, which will face heavy oversight.
- Unacceptable risk – includes systems that will be banned outright.