European Union officials have finalized a historic agreement on the Artificial Intelligence Act, creating the world’s first comprehensive legal framework for artificial intelligence. The deal was reached after marathon negotiations in Brussels.

The landmark legislation aims to regulate AI based on its potential risks and impact. It represents a significant step in governing a technology that is rapidly transforming societies and economies globally.
Risk-Based Approach Defines New AI Rules
The new rules adopt a tiered, risk-based approach to regulation. Systems posing an unacceptable risk, such as social scoring by governments, will be banned outright. High-risk AI used in critical areas, including medical devices and critical infrastructure, will face strict obligations. According to Reuters, the act also sets clear requirements for general-purpose AI models, ensuring transparency and safety before they reach the public.
Balancing Innovation with Fundamental Rights
The legislation seeks a delicate balance between fostering innovation and protecting citizens. It establishes clear guidelines for the use of biometric identification by law enforcement. Real-time remote biometric identification in public spaces will be heavily restricted.
The rules are designed to build trust and legal certainty for developers and users. Much like the EU’s data protection law, the GDPR, the framework is expected to set a global benchmark that other major economies are now likely to follow.
Thought you’d like to know
What is considered ‘high-risk’ AI under this act?
High-risk AI includes systems used in critical infrastructure, medical devices, and education, as well as AI for employment selection and essential public services. These applications will require rigorous assessment and ongoing compliance.
When will the EU AI Act come into force?
The act is expected to be formally adopted by mid-2024. Its provisions will then be phased in over several years. Some bans on unacceptable risk AI could apply within just six months.
How does the act regulate foundation models like ChatGPT?
The act imposes specific transparency obligations on foundation models before they are released. Developers must provide detailed summaries of the training data used and implement robust cybersecurity measures.
What are the penalties for non-compliance?
Fines for violating the AI Act can be substantial. They can reach up to 35 million euros or 7% of a company’s global annual turnover. The exact amount depends on the infringement and the size of the company.
How will this affect companies outside the EU?
Any company offering AI systems in the EU single market must comply. This includes international tech firms and developers. The rules have an extraterritorial effect, much like the GDPR data law.
Trusted Sources
European Commission, Reuters, Associated Press, Bloomberg