EU Delays AI Act's Strictest Rules Until 2027, Giving Tech Giants and SMEs Crucial Breathing Room
About this audio
Think about it, listeners: the AI Act, Regulation (EU) 2024/1689, took effect on August 1, 2024, banning prohibited practices like social scoring by February 2025 and covering general-purpose AI models (think OpenAI's GPTs) by August 2025. Providers of foundation models now face an AI Office empowered under Article 75 to levy fines of up to 3% of global turnover, per Trusaic's March 25 breakdown by Robert Sheen. But this Omnibus tweak clarifies the AI Office's role, excluding Annex I products while looping in same-provider general-purpose systems, and cuts the generative AI marking grace period from six to three months after August 2026.
As a tech ethicist tweaking my own high-risk hiring algorithm, I feel the ripple. Businesses in healthcare, finance, and law enforcement—deployers in 27 member states—gain breathing room, but the clock ticks. Aurora Trust warns SMEs need 3-6 months for compliance audits, EU database registration, and human oversight training. Push Annex I references to Annex B, and suddenly embedded AI in regulated products dodges dual bureaucracy, slashing costs without skimping on safety.
This isn't delay for delay's sake; it's pragmatic evolution. The Council echoes Parliament, reinstating provider registrations and pushing AI sandboxes to December 2027. Extraterritorial bite means U.S. giants like Google must comply if outputs touch EU soil. Provocative question: Does this flexibility turbocharge EU innovation, or just let risky AI linger? In a world where GPAI blurs creator and deployer, the AI Office's implementing acts under Regulation 2019/1020 could redefine enforcement.
The Act's genius is risk-tiering—unacceptable risks banned, high-risk scrutinized—but implementation snags expose the human in the machine. As Quantamix notes, full enforcement looms by 2027, urging us to build trustworthy AI now.
Thanks for tuning in, listeners—subscribe for more deep dives. This has been a Quiet Please production, for more check out quietplease.ai.
Some great Deals https://amzn.to/49SJ3Qs
For more check out http://www.quietplease.ai
This content was created in partnership with, and with the help of, artificial intelligence (AI).