Why AI Detectors Don't Work for Education
About this episode
In this episode of Ed-Technical, Libby and Owen explore why traditional AI detection tools are struggling in academic settings. As students adopt increasingly sophisticated evasion methods, such as paraphrasing tools, hybrid writing, and sequential model use, detection accuracy drops and false positives rise. Libby and Owen look at the research showing why reliable detection with automated tools is so difficult, including why watermarking and statistical analysis often fail in real-world conditions.
The conversation shifts toward process-based and live assessments, such as keystroke tracking and oral exams, which offer more dependable ways to evaluate student work. They also discuss the institutional challenges that prevent widespread adoption of these methods, including resource constraints and student resistance. Ultimately, they ask how the debate over detection could move toward more meaningful assessment.
Join us on social media:
- BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel)
- Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical
- Subscribe to BOLD’s newsletter: https://bold.expert/newsletter
- Stay up to date with all the latest research on child development and learning: https://bold.expert
Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.