• 0 Posts
  • 58 Comments
Joined 24 days ago
Cake day: April 7th, 2025

  • vivendi@programming.dev to Microblog Memes@lemmy.world · Good manners are priceless.
    +20 / −3 · edited · 5 days ago

    Inference costs are very, very low. You can run Mistral Small 24B fine-tunes that are better than GPT-4o and actually quite usable on your own local machine.
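
    For illustration, here’s a minimal local-inference sketch using llama-cpp-python; the GGUF repo id and quant filename are placeholder assumptions rather than anything from this comment, so substitute whichever fine-tune you actually use:

    ```python
    # Minimal sketch: run a quantized Mistral Small 24B fine-tune locally.
    # The repo_id and filename below are illustrative assumptions.
    from llama_cpp import Llama

    llm = Llama.from_pretrained(
        repo_id="bartowski/Mistral-Small-24B-Instruct-2501-GGUF",  # assumed quant repo
        filename="*Q4_K_M.gguf",   # ~4-bit quant; fits in roughly 16 GB
        n_ctx=8192,                # context window
        n_gpu_layers=-1,           # offload every layer to the GPU if present
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Explain KV caching in two sentences."}],
    )
    print(out["choices"][0]["message"]["content"])
    ```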

    As for training costs, Meta’s Llama team offsets its emissions through environmental programs, which is greener than 99.9% of the companies making the products you use.

    TL;DR: don’t use ClosedAI; use Mistral or other FOSS projects.

    EDIT: I recommend cognitivecomputations’ Dolphin 3.0 Mistral Small R1 fine-tune in particular. In truth, I’ve only used it for mathematical workloads, but it has been exceedingly good at my tasks so far. Both the training set and the model are FOSS and uncensored. You’ll need a custom system prompt to activate the chain-of-thought reasoning, and a comparatively low temperature to keep the model from creating logic loops for itself (the 0.1–0.4 range should be OK); a sketch of both follows.
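
    Roughly, assuming llama-cpp-python again (the repo id and the system-prompt wording below are illustrative assumptions; the model card documents the exact reasoning prompt the model expects):

    ```python
    # Sketch of the setup described above: a custom system prompt to enable
    # chain-of-thought reasoning, plus a low sampling temperature.
    from llama_cpp import Llama

    llm = Llama.from_pretrained(
        repo_id="bartowski/Dolphin3.0-R1-Mistral-24B-GGUF",  # assumed quant repo
        filename="*Q4_K_M.gguf",
        n_ctx=8192,
        n_gpu_layers=-1,
    )

    # Assumed wording -- check the model card for the exact reasoning prompt.
    SYSTEM_PROMPT = (
        "You are Dolphin, a helpful reasoning assistant. Think through the "
        "problem step by step inside <think>...</think> tags before giving "
        "your final answer."
    )

    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "Prove the sum of two odd integers is even."},
        ],
        temperature=0.2,  # low, per the 0.1-0.4 range suggested above
    )
    print(out["choices"][0]["message"]["content"])
    ```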

  • vivendi@programming.dev to Comic Strips@lemmy.world · based
    +2 / −3 · 5 days ago

    Funny how your ideology is to dickride capital and sow disruption in effective leftist systems, yet you twist yourself into enough of an ideological pretzel that you somehow also dickride ACAB.

    Average .world “ideology shopping” individual

    Also, your name is familiar. Aren’t you the rabid Kamala supporter? If so, that is insanely ironic.