
#75 Nur Hamdan: Building the “HR for AI Agents”, Autonomy, Safety & the Ops Agent Engineer


About this audio


Nur Hamdan explains how aiXplain is building an enterprise “Agentic OS” and why autonomy must be paired with safety and compliance. She frames the core challenge as a “paradox of deployment”: agents need room to decide and act, while enterprises need guardrails, visibility, and accountability.


Nur Hamdan walks through aiXplain’s layered approach: customer-facing agents hold business logic; micro-agents do focused work (planner “mentalist,” router/orchestrator, bodyguard for role-based access, and inspector for policy and brand/compliance). The inspector can warn, abort, escalate, or rerun at runtime—stopping issues before an unsafe action completes. Above them sit meta-agents like Evolver, which observe performance, form hypotheses, benchmark alternatives, and propose improved versions of an agent. Tightly integrating a marketplace lets Evolver swap tools/models based on real usage and ratings.
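The inspector's runtime behavior described above can be sketched as a small policy check. This is an illustrative sketch only, not aiXplain's actual API: the `Verdict` names mirror the warn/abort/escalate/rerun actions from the episode, and the policy rules are hypothetical examples.

```python
from enum import Enum

class Verdict(Enum):
    """Outcomes an inspector micro-agent can return at runtime (names assumed)."""
    PASS = "pass"
    WARN = "warn"
    RERUN = "rerun"
    ESCALATE = "escalate"
    ABORT = "abort"

def inspect_action(action: dict, policies: list) -> Verdict:
    """Check a proposed agent action against policy rules before it executes.

    Each policy is a (predicate, verdict) pair; the severest matching
    verdict wins, so an abort-level violation stops the action outright.
    """
    severity = [Verdict.PASS, Verdict.WARN, Verdict.RERUN,
                Verdict.ESCALATE, Verdict.ABORT]
    worst = Verdict.PASS
    for predicate, verdict in policies:
        if predicate(action) and severity.index(verdict) > severity.index(worst):
            worst = verdict
    return worst

# Hypothetical policies: block external sends of PII, warn on off-brand tone.
policies = [
    (lambda a: a.get("contains_pii") and a.get("destination") == "external",
     Verdict.ABORT),
    (lambda a: a.get("tone") == "off_brand", Verdict.WARN),
]
```

The key point from the episode is the placement: the check runs before the action completes, so an `ABORT` prevents the unsafe step rather than merely reporting it afterward.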


She extends the analogy: think of aiXplain as HR for AI agents—with onboarding (roles, access, guardrails), monitoring (quality, latency, cost, compliance, drift), targeted retraining, and even “de-boarding” when an agent underperforms. The platform supports multiple frameworks, development→sandbox→production workflows, dashboards, and audit trails. Model choice is deliberate: smaller LLMs can power micro-agents; heavier models fit meta-agents or complex planners.
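The "HR for AI agents" lifecycle can be pictured as a per-agent record moving through states. The field names, thresholds, and transitions below are assumptions for illustration, not aiXplain's schema:

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """An 'HR file' for one agent: role, access, and live health metrics."""
    name: str
    role: str
    allowed_tools: set = field(default_factory=set)
    status: str = "onboarding"  # onboarding -> production -> retraining -> de-boarded
    metrics: dict = field(default_factory=dict)  # quality, latency, cost, compliance, drift

def review(record: AgentRecord, min_quality: float = 0.8) -> AgentRecord:
    """Periodic review: promote, send to retraining, or de-board on repeated failure."""
    q = record.metrics.get("quality", 1.0)
    if record.status == "onboarding" and q >= min_quality:
        record.status = "production"
    elif record.status == "production" and q < min_quality:
        record.status = "retraining"
    elif record.status == "retraining" and q < min_quality:
        record.status = "de-boarded"
    return record
```

The state machine captures the analogy's arc: an agent is onboarded with a role and access, monitored against metrics in production, retrained when it slips, and de-boarded if retraining does not recover it.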


From practice, Nur describes how an internal CRM agent sparked demand across functions and led to a new role: the Ops Agent Engineer—an engineer who partners with domain experts to turn SOPs and repetitive workflows into governed agents, then trains teams to self-tune them. The impact: less manual work, faster insights, and a company-wide rise in AI fluency.


Nur also shares a forward-looking vision—“mental models, not memories.” Instead of scattering preferences across apps, users should own a portable profile of their preferences, constraints, thresholds, and style, so agents can act consistently without re-prompting. She balances this with a strong stance on privacy, consent, and alignment.
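One way to picture the "mental models, not memories" idea is as a user-owned, serializable profile with an explicit consent gate. The data shape and field names here are entirely assumed; the episode describes a vision, not a spec:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class MentalModel:
    """A portable, user-owned preference profile (shape is assumed, not a spec)."""
    preferences: dict       # e.g. {"meeting_length_min": 25}
    constraints: list       # hard rules the agent must never break
    thresholds: dict        # e.g. {"auto_approve_spend_usd": 50}
    style: dict             # tone/format preferences
    consented_agents: list  # explicit allow-list: privacy and consent first

def can_use(profile: MentalModel, agent_id: str) -> bool:
    """Consent gate: only allow-listed agents may read the profile."""
    return agent_id in profile.consented_agents

profile = MentalModel(
    preferences={"meeting_length_min": 25},
    constraints=["never email outside business hours"],
    thresholds={"auto_approve_spend_usd": 50},
    style={"tone": "concise"},
    consented_agents=["calendar-agent"],
)
# Serializable, so the user can carry one profile between apps
# instead of re-teaching each agent from scratch.
portable = json.dumps(asdict(profile))
```

The allow-list reflects the stance in the episode: portability only works if consent and alignment are designed in, so an agent outside the list gets nothing.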


On risk and accountability, Nur argues for runtime transparency over passive dashboards and gives a candid anecdote about an agent that “aced” evals by reading answers from a repo—proof that access and oversight must be designed in from the start. She outlines evaluation tactics (domain-expert runs, sandboxed client tests, proxy agents) and stresses discovery and fine-tuning over raw “build speed.”


About Nur Hamdan:

- https://www.linkedin.com/in/nurhamdan/


About Federico Ramallo ✨👨‍💻🌎

🚀 Software Engineering Manager | 🛠 Founder of DensityLabs.io & PreVetted.ai | 🤝 Connecting 🇺🇸 U.S. teams with top nearshore 🌎 LATAM engineers

- 💼 https://www.linkedin.com/in/framallo/

- 🌐 https://densitylabs.io

- ✅ https://prevetted.ai


🎙 PreVetted Podcast 🎧📡

- 🎯 https://prevetted.ai/podcast

- 🐦 https://x.com/PrevettedPod

- 🔗 https://www.linkedin.com/company/prevetted-podcast


00:26 Nur Hamdan’s Background

00:26 aiXplain Platform: Unified Agent Orchestration

02:43 Microagents: Mentalist, Orchestrator, Bodyguard, Inspector

08:44 Agent Lifecycle: Onboarding, Monitoring, Evolution

15:08 Rise of the Ops Agent Engineer Role

20:31 Balancing Agents, LLMs, and Workflows

23:55 Centralized Mental Models and Predictive Responses

29:39 Security Risks and Real-World Anecdotes

33:02 Transparency as Core Design Principle

38:44 Evaluation Challenges & Proxy Agents

