AI or Not

Author(s): Pamela Isom

About this audio

Welcome to "AI or Not," the podcast where digital transformation meets real-world wisdom, hosted by Pamela Isom. With over 25 years of guiding the top echelons of corporate, public and private sectors through the ever-evolving digital landscape, Pamela, CEO and Founder of IsAdvice & Consulting LLC, is your expert navigator in the exploration of artificial intelligence, innovation, cyber, data, and ethical decision-making. This show demystifies the complexities of AI, digital disruption, and emerging technologies, focusing on their impact on business strategies, governance, product innovations, and societal well-being. Whether you're a professional seeking to leverage AI for sustainable growth, a leader aiming to navigate the digital terrain ethically, or an innovator looking to make a meaningful impact, "AI or Not" offers a unique blend of insights, experiences, and discussions that illuminate the path forward in the digital age. Join us as we delve into the world where technology meets humanity, with Pamela Isom leading the conversation.




© 2025 AI or Not
Business Development & Entrepreneurship · Management & Leadership · Economics
Episodes
  • E040 – AI or Not – Pupak Mohebali and Pamela Isom
    Sep 9 2025

    Welcome to "AI or Not," the podcast where we explore the intersection of digital transformation and real-world wisdom, hosted by the accomplished Pamela Isom. With over 25 years of experience guiding leaders in corporate, public, and private sectors, Pamela, the CEO and Founder of IsAdvice & Consulting LLC, is a veteran in successfully navigating the complex realms of artificial intelligence, innovation, cyber issues, governance, data management, and ethical decision-making.

    The complex landscape of AI governance demands more than theoretical frameworks; it requires practical bridges between policy and implementation. Dr. Pupak Mohebali, AI policy consultant and researcher with a background in international security, brings a refreshingly grounded perspective to this challenge.

    Dr. Mohebali reveals how her multidisciplinary experience shapes her approach to making AI governance accessible. "Most organizations aren't lacking frameworks," she explains. "They're lacking translation between policy and practice." Her AI governance starter kit transforms abstract principles into straightforward questions: What AI tools are we using? Who's responsible if something goes wrong? What data feeds these systems? This practical approach helps teams engage with governance without feeling overwhelmed by complexity.

    The conversation challenges the dangerous myth that "AI is just a tool." Every AI system reflects human decisions about data selection, goals, and beneficiaries. By pretending AI is neutral, we shift blame from designers and organizations to the technology itself, an abdication of responsibility that Dr. Mohebali firmly rejects. This perspective connects directly to the ongoing importance of AI literacy, not to make everyone a technical expert, but to empower people to ask meaningful questions about how AI affects their lives.

    Perhaps most eye-opening is the discussion of AI's hidden environmental footprint. Training a single large model can generate emissions equivalent to those of five cars over their entire lifespans, while services like ChatGPT potentially consume 500,000 kilowatt-hours daily. These costs remain largely invisible, particularly when systems operate through remote cloud services. "We need more than incentives," Dr. Mohebali argues. "Environmental considerations must be mandatory in regulations from the outset."

    The conversation concludes with a powerful insight: AI ethics isn't a fixed endpoint but a process that continuously questions who defines ethical standards and who benefits. Want to develop a more nuanced understanding of AI governance that balances innovation with responsibility? This episode offers practical wisdom for navigating the complex intersection of technology, policy, and human impact. Subscribe and join the conversation about creating AI systems that truly serve humanity.

    26 min
  • E039 – AI or Not – Stephen Pullum and Pamela Isom
    Aug 26 2025


    Dive into a mind-expanding journey through four decades of technology evolution with Stephen Pullum, a veteran who's been working with AI since before most people knew it existed. From his early days programming a Commodore 64 in 1982 to managing sophisticated AI systems for the Air Force in the late 1980s, Stephen offers a rare historical perspective that helps us understand today's AI revolution.

    Stephen introduces us to the crucial role of the Chief AI Security Officer (CAISO)—a position he's pioneered to bridge the dangerous gap between traditional cybersecurity and AI governance. "Your basic CISO doesn't understand AI systems," he explains, "while your AI officers don't understand enterprise security." This disconnect creates vulnerabilities that organizations are only beginning to recognize.

    The conversation takes a fascinating turn when Stephen shares his experiments with agentic AI systems like Mantis and GenSpark. Through a technique he calls "shadow prompting," he demonstrates how these autonomous agents can function more as partners than tools, making decisions and collaborating without constant human intervention. Imagine a world where multiple AI agents verify each other's work before humans even see it—that future is closer than we think.

    Perhaps most thought-provoking is Stephen's challenge to conventional wisdom about AI guardrails and bias. He makes a critical distinction between policies (which people follow or break) and true guardrails (which actively prevent harm), arguing that many organizations confuse the two. And on the controversial topic of AI bias, he offers a perspective that will make you question common assumptions: "There isn't any such thing as AI bias. AI is programmed by individuals who are in their own communities. Anything outside their communities doesn't fit into the algorithm."

    Whether you're a seasoned AI professional or just beginning to explore this field, Stephen's insights from the frontlines of technology evolution will expand your understanding of where we've been and where we're heading. As he says with infectious enthusiasm, "Enjoy the ride. You don't know what's in the labs. You have no idea what's coming next."

    51 min
  • E038 – AI or Not – Evan Benjamin and Pamela Isom
    Aug 12 2025


    The rapid advancement of artificial intelligence has created an urgent need for a deeper understanding beyond basic prompt engineering. In this illuminating conversation with Evan Benjamin, a senior project delivery consultant and AI specialist, we uncover the critical infrastructure considerations that are often overlooked in the rush to adopt the latest AI technologies.

    Evan shares his remarkable journey from legal tech expert to AI infrastructure specialist, highlighting how the worlds of e-discovery and generative AI have converged in unexpected ways. What began as attorneys experimenting with prompts has evolved into complex, multi-agent systems that require entirely new approaches to implementation and security.

    One of the most compelling insights centers on how organizations approach AI tools—treating them as simple product upgrades rather than fundamentally different technologies with unique security implications. "We're beta testers for OpenAI and Anthropic, but we're completely neglecting our own privacy and security," Evan warns. This cavalier approach extends to skipping essential documentation, such as model cards, which contain critical information about capabilities and limitations.

    We explore the evolution of threat modeling for AI systems, examining why traditional cybersecurity frameworks, such as STRIDE or PASTA, cannot be applied directly to AI environments. New frameworks such as MAESTRO (designed specifically for multi-agent environments) and the OWASP Top 10 for LLMs represent more appropriate approaches for identifying AI-specific threats. With new attack surfaces emerging through agentic AI, organizations must adapt their security practices accordingly.

    The conversation takes a fascinating turn toward AI literacy, particularly examining how the EU AI Act establishes a higher standard than many organizations currently achieve. While companies claim to prioritize AI adoption, true literacy extends far beyond basic prompt abilities to comprehensive knowledge of the AI lifecycle. This literacy gap presents significant challenges but also opportunities for those willing to invest in deeper understanding.

    As we transition from the era of LLMs to what Evan calls "the year of agents and agentic AI," your organization's approach to implementation, security, and governance must evolve in tandem with the technology. Take the first step today by committing to improve your AI knowledge by just 2% daily—whether through videos, books, or articles—and watch how quickly your understanding transforms.

    57 min