• @cyd@lemmy.world · 18 points · 11 months ago

    Strange that they don’t just use an open-weights model; there are several now that surpass ChatGPT 3.5, which is probably good enough for what they need.

    • FaceDeer · 15 points · 11 months ago

      Might be that they started training before those open models were available. Or they were just lazy and OpenAI’s API was easier.

      • @cyd@lemmy.world · 6 points · 11 months ago

        Mistral 7B and DeepSeek are two open-weights models that surpass GPT-3.5, though not GPT-4, on several benchmarks.

      • @4onen@lemmy.world · 3 points · 11 months ago

        Mixtral 8x7B just came out. It codes better than ChatGPT in the few prompts I’ve tried so far, and I can run it at 2 to 3 tokens per second on my GPU-less laptop.