#19 Gabe Alfour on why AI alignment is hard, what it would mean to solve it & what ordinary people can do about existential risk

About this audio

Gabe Alfour is a co-founder of Conjecture and an advisor to Control AI, both organisations working to reduce risks from advanced AI.

We discussed why AI poses an existential risk to humanity, what makes this problem so hard to solve, why Gabe believes we need to prevent the development of superintelligence for at least the next two decades, and more.

Follow Gabe on Twitter

Read The Compendium and A Narrow Path
