
Starkly

Author: Ni'coel Stark

About this audio

Rooted in the human-centered frame of 4IR and Society 5.0, Starkly trains the musculature of analysis and the capacity that decision-making requires. Each episode slows automated thinking so society and technology serve, not steer, our judgment. For future leaders and the promiscuously curious, Ni'coel maps nuance, surfaces tacit and embodied data, and tests assumptions. She advances a pedagogy of Human Decision Intelligence, integrating skills and addressing capacity in practice. Unlearn tidy myths and make exacting decisions that bridge human and machine intelligence.

https://humandecisionintelligence.com

Categories: Philosophy, Social Sciences
Episodes
  • Artificial Wisdom
    Dec 30 2025

    Technology is neither salvation nor threat on its own; it is an amplifier. Artificial intelligence can imitate language, pattern, and human systems, but imitation is not understanding, and acceleration is not discernment. In this episode, Ni'coel is joined by responsible-AI and international data-science leader Ricardo Baeza-Yates to distinguish wisdom as Human Decision Intelligence: the capacity for reflective judgment under conditions of uncertainty, plurality, and consequence.

    They explore how contemporary tools often remove the very conditions humans need to grow. They look at the rising tendency to outsource judgment, the difference between alignment and integrity, the risks of proxy data, and the widening divide between humans who develop sapience and those who surrender agency to machines.

    🔸Explore the global learning hub: humandecisionintelligence.com

    🔸Cohost: Ricardo Baeza-Yates is a part-time WASP Professor at KTH Royal Institute of Technology in Stockholm, as well as a part-time professor in the Department of Engineering at Universitat Pompeu Fabra in Barcelona and the Department of Computing Science at the University of Chile in Santiago. He was Director of Research at the Institute for Experiential AI of Northeastern University (2021-25) and VP of Research at Yahoo Labs (2006-16). He is a member of the AI technology policy committees of GPAI/OECD, ACM, and IEEE. He is co-author of the best-selling textbook Modern Information Retrieval, which won the ASIST 2012 Book of the Year award. He has won national scientific awards in Chile (2024) and Spain (2018). He holds a Ph.D. in computer science; his areas of expertise are responsible AI, web search and data mining, and data science and algorithms in general.

    36 min
  • Moral Imagination
    Dec 15 2025

    Ni’coel and Minh Do explore moral imagination as a core sapient capacity, one that expands the option set before optimization. The episode frames moral imagination as discernment plus analogical reasoning functioning as a pre-decision engine within Human Decision Intelligence. Moral imagination enables people to anticipate harm, identify outcomes worth scaling, and resist the drift toward automated thinking. Ni’coel points out how contemporary systems reward speed over reflection, measurable metrics over meaning, and selection over invention, leaving society to efficiently optimize the wrong things. The episode argues that cultivating moral imagination is no longer optional: it’s one of the essential antidotes to mental and emotional atrophy in an accelerating machine-driven era.

    🔸Explore the global learning hub: humandecisionintelligence.com

    🔸Cohost: Minh Do is an entrepreneur, filmmaker, and speaker. He is the co-founder of Machine Cinema, a collective focused on AI and emerging tech in film and art, and of Fantastic Day, where he is head of AI, working with musicians, brands, and filmmakers to produce AI content. Minh is also a producer at Fairground.tv, an AI FAST channel that aims to produce a 24/7 slate of AI content for global distribution.

    Drawing on his diverse background as a former VC, journalist, musician, and teacher, Minh is curious about how AI will transform entertainment, how AI will challenge our understanding of consciousness, and, in particular, where Zen Buddhism and AI intersect.

    Minh is in Creator Partner Programs for Google Labs, Sora, ChatGPT 4o Image, Pika, Luma Dream Machine, Quander, and more, allowing him to play with, teach, and showcase the cutting edge of AI image and video generation.

    30 min
  • Starkly: Conversations in Human Decision Intelligence — Intro
    Sep 28 2025

    Starkly is a conversation series in Human Decision Intelligence. We slow automated thinking so society and technology serve, not steer, human judgment.

    Context. Now, in the Fourth Industrial Revolution, society outsources not only tasks but judgment. Certain social and technological conditions quietly degrade thinking and decision-making. Starkly exists to slow automated thinking so human judgment leads.

    What HDI is. Ni'coel's pedagogy and framework: not therapy, not productivity hacks, not change management.

    What breaks today. Legacy KPIs/OKRs optimize machine-legible outputs (throughput, compliance). Forcing sapience into those yardsticks too early creates a category error and brittleness.

    What we practice.

    • Discernment (precision of perception)

    • Moral imagination (future-consequences with human context)

    • Xenopathy (increasing tolerance for our ignorance and anxiety around the other)

    • Metacognition (watching how we think while we think)

    • Analogical reasoning (fit by ontology, not label)

    • Foresight (long-horizon consequence scanning)

    • Working in liminality and existential math (holding uncertainty and doing advanced inter-domain analysis)

    • Cathedral thinking (decisions that compound across long horizons)

    • Responsive tempo (speed calibrated to reality, not dashboards)

    • Spectatorship → Participation (agency recovery)

    How outcomes change. We repattern perception and decision pipelines so good choices become native under pressure (anxiety). We reduce projection errors, shorten repair cycles, and improve long-horizon bet quality.

    Human-legible indicators we track.

    • Decision latency (from reflex to optimal)

    • Rework / repair cycles (cycle count and depth)

    • Projection error rate (as surfaced in post-decision debriefs)

    • Ambiguity tolerance (measured in anxiety levels)

    • Relational repair rate (conflict → closure cadence)

    Principle: stabilize in humans before any machine instrumentation.

    🔸Explore the global learning hub: humandecisionintelligence.com

    5 min