
Episode 019: LLM Evaluation Frameworks
About this audio
Lots of people like to talk about the importance of prompts, context, and what is sent to an LLM. Few discuss the even more important aspect of an LLM-driven system: evaluating its output.
In this episode, we discuss traditional and modern metrics used to evaluate LLM outputs, and we review the common frameworks for obtaining that feedback.
Though evals are a lot of work (and easy to do poorly), those building (or buying) LLM-driven systems should be transparent about their process and the current state of their eval framework.