Episodes

  • Ep 98: Empowering an AI-Ready Generation to Learn, Create, and Lead with Jeff Riley
    Dec 19 2025
    Bob Pulver speaks with Jeff Riley, former Massachusetts Commissioner of Education and Executive Director of Day of AI, a nonprofit launched out of MIT. They explore the urgent need for AI literacy in K-12 education, the responsibilities of educators, parents, and policymakers in the AI era, and how Day of AI is building tools, curricula, and experiences that empower students to engage with AI critically and creatively. Jeff shares both inspiring examples and sobering warnings about the risks and rewards of AI in the hands of the next generation.

    Keywords: Day of AI, MIT RAISE, responsible AI, AI literacy, K-12 education, student privacy, AI companions, Common Sense Media, AI policy, AI ethics, educational technology, AI curriculum, teacher training, creativity, critical thinking, digital natives, student agency, future of education, AI and the arts, cognitive offloading, generative AI, AI hallucinations, PISA 2029, AI festival

    Takeaways:
    • Day of AI is equipping teachers, students, and families with tools and curricula to understand and use AI safely, ethically, and productively.
    • AI literacy must start early and span disciplines; it’s not just for coders or computer science classes.
    • Students are already interacting with AI — often without adults realizing it — including the widespread use of AI companions.
    • A core focus of Day of AI is helping students develop a healthy skepticism of AI tools, rather than blind trust.
    • Writing, critical thinking, and domain knowledge are essential guardrails as students begin to use AI more frequently.
    • The AI Festival and student policy simulation initiatives give youth a voice in shaping the future of AI governance.
    • AI presents real risks — from bias and hallucinations to cognitive offloading and emotional detachment — especially for children.
    • Higher education and vocational programs are beginning to respond to AI, but many are still behind the curve.

    Quotes:
    “AI is more powerful than a car — and yet we’re throwing the keys to our kids without requiring any kind of driver’s ed.”
    “We want kids to be skeptical and savvy — not just passive consumers of AI.”
    “Students are already using AI companions, but most parents have no idea. That gap in awareness is dangerous.”
    “Writing is thinking. If we outsource writing, we risk outsourcing thought itself.”
    “The U.S. invented AI — but we risk falling behind on AI literacy if we don’t act now.”
    “Our goal isn’t to scare people. It’s to prepare them — and let young people lead where they’re ready.”

    Chapters:
    00:00 - Welcome and Introduction to Jeff Riley
    01:11 - From Commissioner to Day of AI
    02:52 - MIT Partnership and the Day of AI Mission
    04:13 - Global Reach and the Need for AI Literacy
    06:37 - Resources and Curriculum for Educators
    08:18 - Defining Responsible AI for Kids and Schools
    11:00 - AI Companions and the Parent Awareness Gap
    13:51 - Critical Thinking and Cognitive Offloading
    16:30 - Student Data Privacy and Vendor Scrutiny
    21:03 - Encouraging Creativity and the Arts with AI
    24:28 - PISA’s New AI Literacy Test and National Readiness
    30:45 - Staying Human in the Age of AI
    34:32 - Higher Ed’s Slow Adoption of AI Literacy
    39:22 - Surfing the AI Wave: Teacher Buy-In First
    42:35 - Student Voice in AI Policy
    46:24 - The Ethics of AI Use in Interviews and Assessments
    53:25 - Creativity, No-Code Tools, and Future Skills
    55:18 - Final Thoughts and Festival Info

    Jeff Riley: https://www.linkedin.com/in/jeffrey-c-riley-a110608b
    Day of AI: https://dayofai.org

    For advisory work and marketing inquiries:
    Bob Pulver: https://linkedin.com/in/bobpulver
    Elevate Your AIQ: https://elevateyouraiq.com
    Substack: https://elevateyouraiq.substack.com
    57 min
  • Ep 97: Challenging the AI Narrative and Redefining Digital Fluency with Jeff and MJ Pennington
    Dec 12 2025
    Bob sits down with Jeff Pennington, former Chief Research Informatics Officer at the Children’s Hospital of Philadelphia (CHOP) and author of You Teach the Machines, and his daughter Mary Jane (MJ) Pennington, a recent Colby College graduate working in rural healthcare analytics. Jeff and MJ reflect on the real-time impact of AI across generations—from how Gen Z is navigating AI’s influence on learning and careers, to how large institutions are integrating AI technologies. They dig into themes of trust, disconnection, data quality, and what it truly means to be future-proof in the age of AI.

    Keywords: AI literacy, Gen Z, future of work, healthcare AI, trusted data, responsible AI, education, automation, disconnection, skills, strategy, adoption, social media, transformation

    Takeaways:
    • Gen Z’s experience with AI is shaped by a rapid-fire sequence of disruptions: COVID, remote learning, and now Gen AI.
    • Both the podcast and book You Teach the Machines serve as a “time capsule” for capturing AI’s societal impact.
    • Orgs are inadvertently cutting off AI-native talent from the workforce.
    • Misinformation, over-hype, and poor PR from big tech are fueling widespread public fear and distrust of AI.
    • AI adoption must move from top-down mandates to bottom-up innovation, empowering frontline workers.
    • Data quality is a foundational issue, especially in healthcare and other high-stakes domains.
    • Real opportunity is in leveraging AI to elevate human work through augmentation, creativity, and access.
    • Disconnection and over-reliance on AI are emerging as long-term social risks, especially for younger generations.

    Quotes:
    “It’s a universal fear now. Everyone has to ask: what makes you AI-proof?”
    “The vitality of democracy depends on popular knowledge of complex questions.”
    “We're not being given the option to say no to any of this.”
    “I’m 100% certain the current winners in AI will not be the winners in five to ten years.”

    Chapters:
    00:02 Welcome and Guest Introductions
    00:48 MJ’s Path: From Computational Biology to Rural Healthcare
    01:52 Why They Launched the Podcast You Teach the Machines
    03:25 Jeff’s Work at CHOP and the Pediatric LLM Project
    06:47 Making AI Understandable: The Book’s Purpose
    09:11 Navigating Fear and Trust in AI Headlines
    11:31 Gen Z, AI-Proof Careers, and Entry-Level Job Loss
    16:33 Why Resilience is Gen Z’s Underrated Superpower
    18:48 Disconnection, Dopamine, and the Social Cost of AI
    22:42 AI’s PR Problem and the Survival Signals We're Ignoring
    25:58 Chatbots as Addictive Companions: Where It Gets Dark
    29:56 Choosing to Innovate: A More Hopeful AI Future
    32:11 The Dirty Truth About Data Quality and Trust
    36:20 How a Brooklyn Coffee Company Fine-Tuned AI with Their Own Data
    40:12 Why “Throwing AI on It” Isn’t a Strategy
    44:20 Measuring Productivity vs. Driving Meaningful Change
    48:22 The Real ROI: Empowering People, Not Eliminating Them
    53:26 Healthcare’s Lazy AI Priorities (and What We Should Do Instead)
    57:12 How Gen Z Was Guided Toward Coding—And What Happens Now
    59:37 Dependency, Education, and Democratizing Understanding
    1:04:22 AI’s Impact on Educators, Students, and Assessment
    1:07:03 The Real Threat Isn’t Just Job Loss—It’s Human Disconnection
    1:10:01 Defaulting to AI: Why Saying "No" Is No Longer an Option
    1:12:30 Final Thoughts and Where to Find Jeff and MJ’s Work

    Jeff Pennington: https://www.linkedin.com/in/penningtonjeff/
    Mary Jane Pennington: https://www.linkedin.com/in/maryjane-pennington-31710a175/
    You Teach The Machines (book): https://www.audible.com/pd/You-Teach-the-Machines-Audiobook/B0G27833N9
    You Teach The Machines (podcast): https://open.spotify.com/show/4t6TNeuYTaEL1WbfU5wsI0?si=bb2b1ec0b53d4e4e
    1 h 10 min
  • Ep 96: Building Learning Communities for a Responsible Future of Work with Enrique Rubio
    Dec 5 2025
    Bob Pulver sits down with community builder and HR influencer Enrique Rubio, founder of Hacking HR. Enrique shares his journey from engineering to HR, his time building multiple global communities, and why he ultimately returned “home” to Hacking HR to pursue its mission of democratizing access to high-quality learning. Bob and Enrique discuss the explosion of AI programs, the danger of superficial “prompting” education, the urgent need for governance and ethics, and the risks organizations face when employees use AI without proper training or oversight. It’s an honest, energizing conversation about community, trust, and building a responsible future of work.

    Keywords: Enrique Rubio, Hacking HR, Transform, community building, democratizing learning, HR capabilities, AI governance, AI ethics, shadow AI, responsible AI, critical thinking, AI literacy, organizational risk, data privacy, HR community, learning access, talent development

    Takeaways:
    • Hacking HR was founded to close capability gaps in HR and democratize access to world-class learning at affordable levels.
    • The community’s growth accelerated during COVID when others paused events; Enrique filled the gap with accessible virtual learning.
    • Many AI programs focus narrowly on prompting rather than teaching leaders to think, govern, and transform responsibly.
    • Companies must assume employees and managers are already using AI and provide clear do’s and don’ts to mitigate risk.
    • Untrained use of AI in hiring, promotions, and performance management poses serious liability and fairness concerns.
    • Critical thinking is declining, and generative AI risks accelerating that trend unless individuals stay engaged in the reasoning process.
    • Community must be built for the right reasons—transparency, purpose, and service—not just lead generation or monetization.
    • AI strategies often overlook workforce readiness; literacy and governance are as important as tools and efficiency goals.

    Quotes:
    “Hacking HR is home for me.”
    “We’re here to democratize access to great learning and great community.”
    “Prompting is becoming an obsolete skill—leaders need to learn how to think in the age of AI.”
    “Assume everyone creating something on a computer is using AI in some capacity.”
    “If managers make decisions based on AI without training, that’s a massive liability.”
    “Most AI strategies can be summarized in one line: we’re using AI to be more efficient and productive.”

    Chapters:
    00:00 Catching up and meeting in person at recent events
    01:18 Enrique’s career journey and return to Hacking HR
    04:43 Democratizing learning and supporting a global HR community
    07:17 The early days of running virtual conferences alone
    09:39 Why affordability and access are core to Hacking HR’s mission
    13:13 The rise of AI programs and the noise in the market
    15:58 Prompting vs. true strategic AI leadership
    18:21 The importance of community intent and transparency
    20:42 Training leaders to think, reskill, and govern in the age of AI
    23:05 Dangers of data misuse, privacy gaps, and dark-web training sets
    26:08 Critical thinking decline and AI’s impact on cognition
    29:16 Trust, data provenance, and risks in recruiting use cases
    31:48 The need for organizational AI manifestos
    32:47 Managers using AI for people decisions without training
    35:12 Why governance is essential for fairness and safety
    39:12 The gap between stated AI strategies and people readiness
    43:54 Accountability across the AI vendor chain
    46:18 Who should lead AI inside organizations
    49:28 Responsible innovation and redesigning work
    53:06 Enrique’s personal AI tools and closing reflections

    Enrique Rubio: https://www.linkedin.com/in/rubioenrique
    Hacking HR: https://hackinghr.io
    55 min
  • Ep 95: Confronting the Realities of Successful AI Transformation with Sandra Loughlin
    Nov 28 2025
    Bob Pulver and Sandra Loughlin explore why most narratives about AI-driven job loss miss the mark and why true productivity gains require deep changes to processes, data, and people—not just new tools. Sandra breaks down the realities of synthetic experts, digital twins, and the limits of current enterprise data maturity, while offering a grounded, hopeful view of how humans and AI will evolve together. With clarity and nuance, she explains the four pillars of AI literacy, the future of work, and why leaning into AI—despite discomfort—is essential for progress.

    Keywords: Sandra Loughlin, EPAM, learning science, transformation, AI maturity, synthetic agents, digital twins, job displacement, data infrastructure, process redesign, AI literacy, enterprise AI, productivity, organizational change, responsible innovation, cognitive load, future of work

    Takeaways:
    • Claims of massive AI-driven job loss overlook the real drivers: cost-cutting and reinvestment, not productivity gains.
    • True AI value depends on re-engineering workflows, not automating isolated tasks.
    • Synthetic experts and digital twins will reshape expertise, but context and judgment still require humans.
    • Enterprise data bottlenecks—not technology—limit AI’s ability to scale.
    • Humans need variability in cognitive load; eliminating all “mundane” work isn’t healthy or sustainable.
    • AI natives—companies built around data from day one—pose real disruption threats to incumbents.
    • Productivity gains may increase demand for work, not reduce it, echoing Jevons’ Paradox.
    • AI literacy requires understanding technology, data, processes, and people—not just tools.

    Quotes:
    “Only about one percent of the layoffs have been a direct result of productivity from AI.”
    “If you automate steps three and six of a process, the work just backs up at four and seven.”
    “Synthetic agents trained on true expertise are what people should be imagining—not email-writing bots.”
    “AI can’t reflect my judgment on a highly complex situation with layered context.”
    “To succeed with AI, we have to lean into the thing that scares us.”
    “Humans can’t sustain eight hours of high-intensity cognitive work—our brains literally need the boring stuff.”

    Chapters:
    00:00 Introduction and Sandra’s role at EPAM
    01:39 Who EPAM serves and what their engineering teams deliver
    03:40 Why companies misunderstand AI-driven job loss
    07:28 Process bottlenecks and the real limits of automation
    10:51 AI maturity in enterprises vs. AI natives
    14:11 Why generic LLMs fail without specialized expertise
    16:30 Synthetic agents and digital twins
    18:30 What makes workplace AI truly dangerous—or transformative
    23:20 Data challenges and the limits of enterprise context
    26:30 Decision support vs. fully autonomous AI
    31:48 How organizations should think about responsibility and design
    34:21 AI natives and market disruption
    36:28 Why humans must lean into AI despite discomfort
    41:11 Human trust, cognition, and the need for low-intensity work
    45:54 Responsible innovation and human-AI balance
    50:27 Jevons’ Paradox and future work demand
    54:25 Why HR disruption is coming—and why that can be good
    58:15 The four pillars of AI literacy
    01:02:05 Sandra’s favorite AI tools and closing thoughts

    Sandra Loughlin: https://www.linkedin.com/in/sandraloughlin
    EPAM: https://epam.com
    1 h 3 min
  • Ep 94: Redefining Recruitment For a More Human-Centric Hiring Experience with Keith Langbo
    Nov 21 2025
    Bob Pulver speaks with Keith Langbo, CEO and founder of Kelaca, about redefining recruitment in the AI era. Keith shares why he founded Kelaca to prioritize people over process, how core values like kindness and collaboration shape culture, and why trust and choice must be built into AI-powered recruiting tools. Bob and Keith explore evolving models of hiring, including fractional workforces, agentic systems, and data-informed decision-making — all rooted in a future where humans remain in control of the technology that serves them.

    Keywords: Keith Langbo, Kelaca, recruitment, hiring, talent acquisition, AI in recruiting, agentic systems, culture add, core values, psychometrics, responsible AI, fractional workforce, gig economy, recruiting automation, candidate experience, structured interviews, Kira, human-centric design, AI trust, global hiring, digital agents, recruitment tech, NLP sourcing, recruiting innovation

    Takeaways:
    • Keith founded Kelaca to humanize the recruitment experience, treating people as partners — not products.
    • Modern recruiting must shift from transactional, resume-driven models to more consultative, intelligence-based practices.
    • AI’s greatest value lies in giving candidates and clients choice, not replacing humans — especially for real-time updates and communication preferences.
    • Recruiters should move from “human-in-the-loop” to “humans in control” — using AI to augment but not automate judgment.
    • Future hiring models may rely on digital agents representing both candidates and employers, enabling richer, data-driven matches.
    • Core values — like kindness, accountability, and enthusiasm — are essential to maintaining culture across full-time and fractional teams.
    • Structured data is key to overcoming bias and improving hiring quality, but psychometrics alone can't capture experience or growth.
    • Many current tools automate broken processes; real innovation requires first rethinking what “better” hiring looks like.

    Quotes:
    “I wanted to treat people like people, not like products.”
    “AI powered but human driven — that’s the experience I want to create.”
    “Resumes are broken. Interviews are often charisma contests. We can do better.”
    “Humans don’t just need to be in the loop — they need to be in control.”
    “I don’t care if you’re full-time or fractional. You still need to show kindness and a willingness to learn.”
    “We’re on the verge of bots talking to bots. That’s exciting — and terrifying.”

    Chapters:
    00:00 Introduction and Keith’s mission behind founding Kelaca
    02:35 The candidate and client frustrations with traditional recruiting
    05:10 Why resumes and interviews are broken — and what to do instead
    07:10 Building feedback loops and AI-enabled candidate communication
    10:45 Choice and context in AI tools: respecting human preference
    13:44 From “human in the loop” to “human in control”
    18:12 Agentic hiring and the rise of digital representation
    25:10 Gig work and applying culture fit to fractional talent
    29:34 Core values as the foundation of culture, not employment status
    33:22 Responsible AI, fairness, and trust in hiring decisions
    40:00 The hype cycle of recruiting tech and design thinking
    42:56 AI as the modern calculator: from caution to capability
    47:16 Global perspectives: AI adoption in US vs UK recruiting
    53:08 Keith’s favorite AI tools and Kelaca’s new product, Kira
    56:28 Closing thoughts and appreciation

    Keith Langbo: https://www.linkedin.com/in/keithlangbo
    Kelaca: https://kelaca.com/
    KIRA Webinar Series: https://www.eventbrite.com/e/how-to-fix-the-first-step-in-hiring-to-drive-retention-introducing-kira-tickets-1853418256899
    55 min
  • Ep 93: Strengthening Human Connection to Build Trust in AI-Fueled Transformation with Dan Riley
    Nov 14 2025
    Bob Pulver talks with Dan Riley, CEO and Co-founder of RADICL, about reshaping work through connection, trust, and clarity. From his roots as a punk rock musician to building Modern Survey and RADICL, Dan shares how creativity, curiosity, and courage fuel his leadership philosophy. Together, they explore the balance between human imperfection and technological advancement, why “high tech” must still serve human needs, and how organizations can build cultures that learn, listen, and adapt. The discussion spans themes of AI strategy, responsible design, employee listening, and the enduring value of genuine human connection.

    Keywords: Dan Riley, RADICL, Modern Survey, Aon, employee listening, people analytics, connection, trust, AI ethics, human-AI collaboration, imperfection, curiosity, creativity, collective intelligence, organizational network analysis, people analytics world, Unleash, Transform, learning culture, human connection, responsible AI

    Takeaways:
    • Imperfection is a defining strength of humanity — and the source of creativity and innovation.
    • The best technology solves real human problems in the flow of work, not just productivity gaps.
    • AI is a mirror, amplifying human intent and behavior; if we lead with empathy and ethics, AI learns from that.
    • Clarity, communication, and transparency are critical to avoiding “AI chaos” inside organizations.
    • Continuous listening and connection are the new foundations for engagement and trust.
    • Curiosity and conversation are essential skills for navigating the fast-moving future of work.
    • The most effective teams balance diverse strengths rather than relying solely on “rock stars.”
    • True progress happens when we keep the human conversation going — across roles, hierarchies, and perspectives.

    Quotes:
    “I define myself as an artist first — a musician, filmmaker, who randomly fell into HR and tech.”
    “The most beautiful part about being human is that we’re imperfect — that’s where the best ideas come from.”
    “AI doesn’t fix our flaws; it amplifies them. It’s a mirror of how we show up.”
    “For technology to work, it has to be solving a human problem in the flow, not just adding to the stack.”
    “It’s okay to say, ‘We don’t have it all figured out yet’ — just be transparent about where you are.”
    “You’ll never regret having a conversation about something important.”

    Chapters:
    00:03 – Welcome and Dan’s background: from punk rock to HR tech
    01:45 – Founding Modern Survey and RADICL’s mission around trust and impact
    05:14 – The changing landscape of work
    06:42 – Highlights from People Analytics World, Transform, and Unleash
    09:50 – Rise of human connection as the dominant theme in work tech
    13:10 – Clarity, communication, and the need for an AI strategy
    16:19 – Productivity, balance, and reinvesting in people
    18:36 – The risk of over-automation and the value of learning
    22:16 – Teaching curiosity and critical thinking in an AI world
    27:25 – Why open conversations about AI matter more than ever
    33:51 – Employee listening, continuous dialogue, and the evolution of engagement
    37:22 – How AI enhances understanding and connection between teams
    40:06 – Organizational network analysis and adaptive learning
    43:21 – Connection, mentorship, and collective intelligence
    46:03 – AI as a mirror: amplification of human behavior and bias
    48:36 – Building balanced, imperfect, and effective teams
    51:48 – Tools, curiosity, and the limits of generative AI
    55:35 – Trusting your judgment and maintaining critical thinking
    56:34 – Staying human amid synthetic connection
    57:45 – Closing reflections and the call for ongoing dialogue

    Dan Riley: https://www.linkedin.com/in/dan-riley-57b9431
    RADICL: http://www.radiclwork.com
    57 min
  • Ep 92: Appreciating the Importance of Self-Awareness to Human-AI Collaboration with Brad Topliff
    Nov 7 2025
    Bob Pulver talks with creative technologist and entrepreneur Brad Topliff about building more human-centered systems for the AI era. Brad reflects on his nonlinear career—from early work in design and user experience, to many years at data and analytics company TIBCO, to his latest venture, SelfActual, which helps people and teams cultivate self-awareness, strengths, and alignment. Together, Bob and Brad explore the intersections of identity, trust, data ownership, and imagination in the workplace, and how understanding ourselves better can make AI more supportive—not more invasive. The conversation bridges psychology, technology, and ethics to imagine a future of work where humans remain firmly in control of their data, choices, and growth.

    Keywords: Brad Topliff, SelfActual, TIBCO, self-awareness, positive psychology, data ownership, digital identity, AI ethics, imagination, human-centric design, trust, internal mobility, talent data, distributed identity, psychological safety, future of work

    Takeaways:
    • Self-awareness is foundational to effective teams and ethical AI use.
    • Personal data about strengths and values should be owned by the individual, not the employer.
    • AI can serve as a mirror and reframing tool, helping people build perspective—not replace human judgment.
    • Internal mobility and growth depend on psychological safety and discretion around what employees share.
    • Positive psychology and imagination can help teams align without reducing people to static personality types.
    • The next era of HR tech should prioritize trust, transparency, and consent in how personal data is used.
    • True human readiness for AI means combining durable human skills with thoughtful technology design.

    Quotes:
    “I became a translator between the arts, the engineers, and leadership—and that’s carried through everything I’ve done.”
    “When you create data about yourself, who owns it? You? Your organization? The answer matters for trust.”
    “Most people think they’re self-aware—but only about twelve percent actually are.”
    “A job interview is two people sitting across the table from each other lying. We both present what we think the other wants to hear.”
    “If you give people autonomy and psychological safety, they’ll show up more fully as themselves.”
    “In the presence of trust, you don’t need security.”

    Chapters:
    00:03 – Welcome and Brad’s background in design, Apple roots, and TIBCO experience
    05:46 – From UX to data: connecting human insight with enterprise technology
    07:48 – Self-awareness, ownership of personal data, and building SelfActual
    11:00 – The tension between authenticity, masking, and “bringing your whole self” to work
    18:19 – Digital credentials, resumes, and rethinking candidate data ownership
    23:08 – Internal mobility, verifiable credentials, and distributed identity
    32:51 – Broad skills vs. specialization and the role of AI in talent matching
    34:48 – Self-awareness, imagination, and positive psychology at work
    46:48 – Rethinking internal mobility and autonomy for well-being and growth
    49:26 – Human-centric AI readiness and the limits of automation
    58:40 – Trust, security, and ownership of data in organizational AI systems
    01:02:37 – Reflections on digital twins, imagination, and collective intelligence
    01:08:06 – Closing thoughts and SelfActual’s human-first approach

    Brad Topliff: https://www.linkedin.com/in/bradtopliff
    SelfActual: https://selfactual.ai
    1 h 3 min
  • Ep 91: Evolving Candidate Engagement from Conversational AI to Hiring Intelligence with Prem Kumar
    Oct 31 2025
    Bob Pulver speaks with Prem Kumar, CEO and Co-founder of Humanly.io, about the evolution of hiring technology and the company's transition from a conversational AI tool to a full-fledged AI-powered hiring platform. Prem discusses the impact of Humanly’s recent acquisitions, expansion into post-hire engagement, and how they help employers address challenges in both high-volume and knowledge worker recruiting. Prem emphasizes the need for responsible, inclusive, and human-centric AI design, and explains how Humanly is helping organizations speed up hiring without sacrificing quality, fairness, or candidate experience.

    Keywords: Humanly, conversational AI, AI interviewing, responsible AI, candidate experience, recruiting automation, employee engagement, AI acquisitions, ethics, RecFest, quality of hire, neurodiversity, candidate feedback, interview intelligence, AI coach, sourcing automation

    Takeaways:
    • Humanly’s evolution includes three strategic acquisitions that expand its platform from candidate screening to post-hire engagement.
    • The company’s mission is to help employers talk to 100% of their applicants—not just the 5% that typically make it through—and reduce time-to-hire.
    • Prem highlights how AI can reduce ghosting by creating 24/7 availability and real-time Q&A touchpoints for candidates.
    • Interview feedback tools and coaching features are being developed for both candidates and recruiters.
    • AI workflow integration is critical—tools must operate within a recruiter’s day-to-day flow to be effective.
    • Humanly’s platform helps uncover quality-of-hire insights by connecting interview behaviors with long-term employee outcomes.
    • Third-party AI audits and ethical guardrails are needed as AI takes on a larger role in hiring.
    • Insights from diverse candidate populations—including neurodiverse candidates and early-career talent—are shaping Humanly’s inclusive design practices.

    Quotes:
    “It’s not human vs. AI—it’s AI vs. being ignored.”
    “Our goal is to reduce time-to-hire without compromising quality or fairness.”
    “We’re obsessed with the problem, not just the solution. That’s what keeps us grounded as we scale.”
    “Responsible AI should be audited just like SOC 2 or ISO—trust is foundational in hiring.”
    “The best interview for one role won’t be the same for another. That’s where personalization and learning matter.”
    “Everything we’ve done to improve access for neurodiverse candidates has made the experience better for everyone.”

    Chapters:
    00:00 – Intro and Prem’s Background
    01:00 – Humanly's Origins and the Candidate Experience Gap
    03:00 – 2025 Growth, Funding, and Acquisition Strategy
    05:15 – From Conversational AI to Full-Funnel Hiring Platform
    06:30 – High-Volume and Knowledge Workers
    08:00 – Combating Ghosting and Delays with AI Speed
    10:30 – Candidate Support and Interview Feedback
    12:00 – Creating a 24/7 Conversational Layer for Applicants
    13:45 – Data-Driven Hiring and Candidate Self-Selection
    15:00 – Interview Coaching and Practice Tools
    17:00 – Acquisitions and Platform Consolidation Feedback
    18:45 – Responsible AI and Third-Party Auditing
    21:00 – Partnering with Values-Aligned Teams and Investors
    22:00 – Measuring Candidate Experience Across All Interactions
    24:00 – Connecting Interview Behavior to Quality of Hire
    26:00 – Coaching Recruiters and Interview Intelligence
    28:45 – Expanding Into Post-Hire and Internal Conversations
    30:00 – The Future of AI in HR and Internal Use Cases
    34:00 – Designing Inclusively for Diverse Candidate Needs
    36:00 – Modalities, Accessibility, and Equity in Interviewing
    39:00 – Generative AI Reflections and Everyday Use
    42:00 – Wrapping Up: What's Next for Humanly

    Prem Kumar: https://www.linkedin.com/in/premskumar
    Humanly: https://humanly.io
    45 min