The comprehensive EU AI Act, which aims to regulate artificial intelligence based on risk, has been officially published in the European Union’s Official Journal.

The new law comes into force on August 1, 2024, marking the start of a phased implementation. By mid-2026, most provisions will apply in full to AI developers, but a series of staggered deadlines falls between now and then, and a few extend further still as different legal requirements take effect.

EU lawmakers reached a political agreement on the AI rulebook in December 2023. The framework scales obligations to an AI system's intended use and the risk it poses: most applications are considered low-risk and will face little regulation, while a smaller set of high-risk use cases must meet stringent rules.

High-risk use cases, such as biometric AI applications or AI used in law enforcement, employment, education, and critical infrastructure, are permitted under the law but come with obligations around data quality and bias prevention. Transparency requirements also apply to developers of tools such as AI chatbots and of general-purpose AI (GPAI) models, like the GPT models underpinning OpenAI's ChatGPT.

Despite heavy lobbying by some AI industry stakeholders and a few member state governments to water down obligations on GPAIs, the law retains transparency requirements for all GPAI developers, plus systemic risk assessments for the most powerful models.

The list of prohibited AI uses takes effect first, six months after the law's commencement, in early 2025. These "unacceptable risk" use cases will soon be illegal. They include social credit scoring systems of the kind used in China, untargeted scraping of the internet or CCTV footage to compile facial recognition databases, and real-time remote biometric identification by law enforcement in public places, except under narrow conditions such as searching for missing persons.

Next, nine months after the law comes into force, around May 2025, codes of practice will begin to apply to developers of in-scope AI applications. The EU AI Office, the ecosystem-building and oversight body established by the law, is responsible for providing these codes. However, the involvement of consultancy firms in drafting the guidelines has raised concerns about potential industry influence.

In August 2025, twelve months after the law's commencement, the transparency requirements for GPAIs start to apply. A subset of high-risk AI systems gets the longest compliance deadline: 36 months from the law's commencement, until August 2027, to meet their obligations. Other high-risk systems must comply within 24 months.

The EU AI Act represents a significant regulatory effort to balance innovation with risk management, aiming to ensure that AI technologies are developed and used responsibly within the region.
