• @kromem@lemmy.world
    7 months ago

    It’s not as good as it seems at the surface.

    It is a model squarely in the “fancy autocomplete” category along with GPT-3 and fails miserably at variations of logic puzzles in ways other contemporary models do not.

    It seems that the larger training data set allows for better modeling of the fancy-autocomplete parts, but other similarly sized models like Mistral appear to have developed better underlying critical-thinking capacities when you scratch below the surface, capacities that are absent here.

    I don’t think it’s a coincidence that Meta’s lead AI researcher is one of the loudest voices criticizing claims of emergent capabilities. There seems to be a degree of self-fulfilling prophecy going on. There were a lot of useful lessons in the creation of Llama 3, but once other models (e.g. Mistral) also adopt extended training, my guess is that any apparent advantages Llama 3 has right now will go out the window.