
All Noise, No Signal
About this audio
We all know political polls are increasingly unreliable. That's why forecasting outfits like 538 aim to separate "the signal and the noise" by assigning grades to pollsters and weighting their forecasts toward those with the best grades.
It seemed like a good plan. So why did it backfire so spectacularly?
Flip Pidot, Peter Hurford and Harry Crane investigate Nate Silver's utterly failed attempt to distinguish good pollsters from bad.
Follow along with our interactive polls and forecasts at OpenModelProject.org. Follow us on Twitter at @OpenModelProj.
If you'd like to see more independent forecasting and unbiased polling in your world (and gain exclusive access to it before anyone else), please consider supporting us on Patreon at patreon.com/openmodel.