HEADLINE: "The EU's AI Act: A Stealthy Global Software Update Reshaping the Future"
About this audio
The AI Act is already law across the European Union, but, as Wikipedia’s timeline makes clear, most of the heavy-duty obligations only phase in between now and the late 2020s. It is risk‑based by design: some AI uses are banned outright as “unacceptable risk,” most everyday systems are lightly touched, and a special “high‑risk” category gets the regulatory equivalent of a full penetration test and continuous monitoring.
Here’s where the past few weeks get interesting. On 19 November 2025, the European Commission released what lawyers are calling the Digital Omnibus on AI. Compliance & Risks, Morrison Foerster, and Crowell & Moring all point out the same headline: Brussels is quietly delaying and reshaping how the toughest parts of the AI Act will actually bite. Instead of a hard August 2026 start date for high‑risk systems, obligations will now kick in only once the Commission confirms that the supporting infrastructure exists: harmonised standards, technical guidance, and an operational AI Office.
For you as a listener building or deploying AI, that means two things at once. First, according to analyses from EY and DLA Piper, the direction of travel is unchanged: if your model touches medical diagnostics, hiring, credit scoring, law enforcement, or education, Europe expects logging, human oversight, robustness testing, and full documentation, all auditable. Second, as Goodwin and JDSupra note, the real deadlines slide out toward December 2027 and even August 2028 for many high‑risk use cases, buying time but also extending uncertainty.
Meanwhile, the EU is centralising power. The new AI Office inside the European Commission, described in detail on the Commission’s own digital strategy pages and by several law firms, will police general‑purpose and foundation models, especially those behind very large online platforms and search engines. Think of it as a kind of European model regulator with the authority to demand technical documentation, open investigations, and coordinate national watchdogs.
Member states are not waiting passively. JDSupra reports that Italy, with Law 132 of 2025, has already built its own national AI framework that plugs into the EU Act. The European Union Agency for Fundamental Rights has been publishing studies on how to assess “high‑risk AI” against fundamental rights, shaping how regulators will interpret concepts like discrimination, transparency, and human oversight in practice.
The meta‑story is this: the EU tried to ship a complete AI operating system in one go. Now, under pressure from industry and standard‑setters like CEN and CENELEC who admit key technical norms won’t be ready before late 2026, it is hot‑patching the rollout. The philosophical bet, often compared to what happened with GDPR, is that if you want to reach European users, you will eventually design to European values: safety, accountability, and human rights by default.
The open question for you, the listener, is whether this becomes the global baseline or a parallel track that only some companies bother to follow. Does your next model sprint treat the AI Act as a blocker, a blueprint, or a competitive weapon?
Thanks for tuning in, and don’t forget to subscribe so you don’t miss the next deep dive into the tech that’s quietly rewriting the rules of everything around you. This has been a Quiet Please production; for more, check out quietplease.ai.
Some great Deals https://amzn.to/49SJ3Qs
For more check out http://www.quietplease.ai
This content was created in partnership and with the help of Artificial Intelligence AI