
2024 E4 Digesting The Data
About this audio
Dónal and Ciarán discuss the vast ocean of data that Large Language Models (LLMs) depend on for their training, covering some of the issues of access to that data and the biases reflected within it. This episode should help you better understand some aspects of the AI training process.
Topics in this episode
- What data is being used to train models like ChatGPT?
- What are "supervised" or "unsupervised" machine learning methods?
- How have the owners of copyrighted data, like news organisations, reacted to the use of their text?
- What issues of bias arise in training models based on existing text?
- What happens when AI models train on AI output?
- How do we morally and ethically align the actions of AI models, as part of their training?
You can get in touch with us at hello@enoughaboutai.com - we'd love to hear your questions, comments or suggestions!
What listeners say about 2024 E4 Digesting The Data
There are no reviews for this title yet.