
EU's AI Act Reshapes the Tech Landscape: From Bans to Transparency Demands
Let’s dive in: the Act is the world’s first comprehensive legal framework for artificial intelligence, and the risk-based regime it establishes is re-coding business as usual. Picture this: if you’re deploying AI in Europe, even if you’re headquartered in Boston or Bangalore, the Act’s tentacles wrap right around your operations. Everything is categorized, from AI that’s flatly forbidden (think social scoring or subliminal manipulation, both banned as of February this year) to high-risk applications like biometrics and healthcare tech, which must comply with an arsenal of transparency, safety, and human oversight requirements by August 2026.
General-purpose AI is now officially in the regulatory hot seat. As of August 2, foundation model providers are expected to meet transparency, documentation, and risk assessment obligations. Translation: the era of black-box models is over, or at the very least you’ll pay dearly for opacity. Fines reach as high as 7 percent of global annual revenue or €35 million, whichever hurts more. ChatGPT, Gemini, LLaMA: if your favorite foundation model isn’t playing by the rules, Europe isn’t hesitating.
What’s genuinely fascinating is the EU’s new scientific panel of independent experts. Launched just last month, the panel acts as the AI Office’s technical eyes: its members evaluate risks, flag systemic threats, and can trigger “qualified alerts” when something big is amiss in the landscape.
But don’t mistake all this machinery for clarity. The Commission’s delayed release of the General-Purpose AI Code of Practice this July exposed deeper ideological fault lines: tension between regulatory zeal and the wild-west energy of AI’s biggest players, and a real epistemic gap over what, precisely, constitutes responsible general-purpose AI. Critics like Kristina Khutsishvili at Tech Policy Press argue that even with three core chapters on Transparency, Copyright, and Safety and Security, the Code glosses over fundamental problems baked into how these systems are created and how their real-world risks are evaluated.
Meanwhile, the European Commission’s latest move—a public consultation on transparency rules for AI, especially around deepfakes and emotion recognition tech—shows lawmakers are crowdsourcing practical advice as reality races ahead of regulatory imagination.
So the story here isn’t just Europe writing the rules; it’s the rest of the world watching, tweaking, sometimes kvetching, and, more often than they’ll admit, copying.
Thank you for tuning in, and don’t forget to subscribe. This has been a Quiet Please production. For more, check out quiet please dot ai.
Some great deals: https://amzn.to/49SJ3Qs
For more, check out http://www.quietplease.ai