European Union lawmakers have officially approved the world’s first comprehensive AI law. The groundbreaking legislation, known as the EU AI Act, was finalized after lengthy negotiations. It establishes a clear legal framework for artificial intelligence development and use.

The new rules aim to foster innovation while managing the risks posed by the most powerful AI systems. According to Reuters, the law categorizes AI applications by risk level, imposing the strictest regulations on technologies deemed unacceptable.
A Risk-Based Approach to Artificial Intelligence
The AI Act operates on a tiered system. Applications with “unacceptable risk,” such as social scoring by governments, are banned outright. High-risk AI, used in critical areas like medical devices and infrastructure, will face stringent requirements for transparency and oversight.
Lower-risk applications, including chatbots and recommendation algorithms, will be subject to lighter transparency rules. Minimal risk systems, like AI-powered spam filters, are largely left unregulated. This structure is designed to be proportionate, ensuring rules match the potential for harm.
Global Impact and Enforcement Timeline
The law’s influence is expected to extend far beyond Europe’s borders. Much like the EU’s GDPR data privacy rules, the AI Act sets a de facto global standard. International companies seeking access to the EU’s vast single market will be compelled to comply with its provisions.
The rules will be phased in over the coming years. Bans on prohibited practices will take effect within six months. The full regulatory framework for general-purpose AI models will be active by mid-2026. This gives companies and regulators time to adapt to the new legal environment.
The EU AI Act represents a fundamental shift in how powerful technology is governed, setting a precedent that nations worldwide are now likely to follow.
Frequently Asked Questions About the EU AI Act
What is considered ‘high-risk’ AI under the new law?
High-risk AI includes systems used in critical infrastructure, medical devices, education, and law enforcement. These applications must meet strict data governance, risk assessment, and human oversight requirements before they can be deployed.
How will the law affect popular AI chatbots?
Chatbots and generative AI tools must be clearly labeled. Their developers must also publish detailed summaries of the copyrighted data used to train their models, ensuring greater transparency for users and creators.
What are the penalties for violating the AI Act?
Fines for non-compliance are severe. They can reach up to 35 million euros or 7% of a company’s global annual turnover, whichever is higher. This creates a strong incentive for companies to adhere to the rules.
When do companies need to be fully compliant?
Most provisions will be fully applicable 24 months after the law enters into force. However, codes of practice for general-purpose AI will apply sooner, and bans on unacceptable AI practices will start in just six months.
Does this law apply to companies based outside of Europe?
Yes. Any company that develops or deploys AI systems within the EU market must comply. This includes U.S. and Chinese tech giants, making the regulation a powerful global benchmark.