• 257: Who Let the LLamas Out? *Bleat Bleat*

  • May 1 2024
  • Length: 1 hr and 2 mins
  • Podcast

  • Summary

  • Welcome to episode 257 of the Cloud Pod podcast – where the forecast is always cloudy! This week your hosts Justin, Matthew, Ryan, and Jonathan are in the barnyard bringing you the latest news, which this week is really just Meta’s release of Llama 3. Seriously. That’s every announcement this week. Don’t say we didn’t warn you.

    Titles we almost went with this week:
    • Meta Llama says no Drama
    • No Meta Prob-llama
    • Keep Calm and Llama on
    • Redis did not embrace the Llama MK
    • The bedrock of good AI is built on Llamas
    • The CloudPod announces support for Llama3 since everyone else was doing it
    • Llama 3, better known as Llama Llama Llama
    • The Cloud Pod now known as the LLMPod
    • Cloud Pod is considering changing its name to LlamaPod
    • Unlike WinAMP, nothing whips the llama's ass
    A big thanks to this week’s sponsor: check out Sonrai Security’s new Cloud Permission Firewall. Just for our listeners, enjoy a 14-day trial at www.sonrai.co/cloudpod

    Follow Up

    01:27 Valkey is Rapidly Overtaking Redis

    • Valkey has continued to rack up support: initial backers AWS, Ericsson, Google, Oracle, and Verizon have now been joined by Alibaba, Aiven, Heroku, and Percona.
    • Numerous blog posts have come out touting Valkey adoption.
    • I’m not sure this whole thing is working out as well as Redis CEO Rowan Trollope had hoped.
    AI Is Going Great – Or How AI Makes All Its Money

    03:26 Introducing Meta Llama 3: The most capable openly available LLM to date

    • Meta has launched Llama 3, the next generation of their state-of-the-art open source large language model.
    • Llama 3 will be available on AWS, Databricks, GCP, Hugging Face, Kaggle, IBM watsonx, Microsoft Azure, Nvidia NIM, and Snowflake, with support from hardware platforms offered by AMD, AWS, Dell, Intel, Nvidia, and Qualcomm.
    • Includes new trust and safety tools such as Llama Guard 2, Code Shield, and CyberSec Eval 2.
    • They plan to introduce new capabilities, including longer context windows, additional model sizes and enhanced performance.
    • The first two models in the Llama 3 family are the 8B and 70B parameter variants, which can support a broad range of use cases.
    • Meta shared benchmarks pitting the Llama 3 8B model against Gemma 7B and Mistral 7B, showing improvements across all major benchmarks — including MATH, where Gemma 7B scored 12.2 versus 30.0 for Llama 3 8B.
    • The 70B model performed comparably to Gemini Pro 1.5 and Claude 3 Sonnet, scoring within a few points of them on most benchmarks.
    • Jonathan recommends using LM Studio to start playing around with LLMs, which you can find at https://lmstudio.ai/

    04:42 Jonathan – “Isn’t it funny how you go from an 8 billion parameter model to a 70 billion parameter model but nothing in between? Like you would have thought there would be some kind of like, some middle ground maybe? But, uh, but… No. But, um, I’ve been playing with the, um, 8 billion parameter model at home and it’s absolutely amazing. It blows everything else out of the water that IR
