
AI Governance with Dr Darryl

Author(s): Dr Darryl

About this audio

In the race to harness AI’s potential, business leaders face unprecedented challenges. Join us as we decode AI governance, bridging the gap between innovation and responsibility. Learn how to steer your organization through the ethical maze and policy landscape of AI, ensuring you’re not just keeping pace, but setting the standard in this new frontier.

Copyright 2024 All rights reserved. Economics
Episodes
  • AI Fabrication in the Courtroom
    Oct 6 2025

    The legal profession faces an accelerating governance crisis that should concern every senior leader overseeing regulatory frameworks and public accountability. Over 410 documented cases worldwide reveal lawyers submitting fabricated court citations generated by artificial intelligence, a problem that has exploded from a few incidents monthly to multiple cases daily in 2025.

    The implications extend far beyond courtrooms. Client expectations for AI integration nearly doubled between 2024 and 2025, yet only 21 per cent of legal firms report comprehensive adoption frameworks, creating dangerous gaps between technological deployment and professional competence.

    13 min
  • The Role of Experts in the Courtroom
    Oct 6 2025

    Expert witnesses are supposed to serve the court, not the party paying them – but across six major jurisdictions, the rules governing how this actually works differ so dramatically that international litigation has become a procedural minefield.

    As artificial intelligence begins reshaping expert testimony itself, understanding these jurisdictional frameworks has never been more urgent.

    18 min
  • AI in the Courtroom
    Oct 6 2025

    Courts worldwide are navigating uncharted waters with artificial intelligence, and their radically different approaches reveal a governance crisis that demands immediate attention from senior leaders. Across eight major jurisdictions, courts have responded to generative AI with starkly contrasting frameworks: New South Wales has imposed categorical prohibitions on AI-generated witness evidence and mandates sworn declarations that AI was not used,¹ whilst Singapore takes a permissive stance requiring no disclosure unless specifically requested, placing full responsibility on individual practitioners.²

    This fragmentation is not merely academic. Courts in the United States and Australia have already sanctioned lawyers for filing submissions citing entirely fabricated cases generated by AI 'hallucinations', where systems like ChatGPT created plausible-sounding but completely fictitious legal precedents.³ The consequences extend far beyond professional embarrassment to fundamental questions about evidentiary integrity, access to justice for self-represented litigants, and the preservation of confidential information that may be inadvertently fed into public AI systems and become permanently embedded in their training data.⁴

    The window for proactive governance is closing rapidly, yet no international consensus has emerged on how to balance innovation with risk management in the administration of justice. New Zealand has pioneered a three-tiered approach with separate guidelines for judges, lawyers and non-lawyers, recognising that different court users face fundamentally different obligations and capabilities,⁵ whilst the United Kingdom has focused exclusively on guidance for judicial officers without addressing practitioner conduct.⁶

    For government executives responsible for policy development, regulatory frameworks, and public sector digitalisation, understanding these divergent approaches is not optional. The report exposes critical gaps in current governance models and demonstrates why courts are moving from permissive to restrictive regulation as verification mechanisms struggle to keep pace with technological advancement.⁷ Download the full analysis to understand how these judicial responses should inform your organisation's approach to AI governance, professional liability frameworks, and access to justice initiatives before fragmented regulation creates compliance nightmares across jurisdictions.

    14 min
Who is Dr. Darryl? More importantly, who is Dr. Carleton? This is a lift of two unnamed, uncredited podcasters discussing AI Governance mostly regarding Australian politics. They are unified in their research; there is no one providing counterpoints. They finish each other's sentences. It is distracting how often they say "TOTALLY", "ABSOLUTELY", "EXACTLY" to what the other person just says. By episode 4, it's practically all I heard, taking away from what I could have learned about balanced, ethical implementation of AI in Australian universities, with oversight. In fact, every episode reveals the TOTALLY needed requirements for AI implementation. ABSOLUTELY needs balance, ethics and oversight. EXACTLY!!!

What IS this?
