• @NotMyOldRedditName@lemmy.world · 1 year ago

      I don’t know of a specific guide, but try these steps:

      1. Go to https://github.com/oobabooga/text-generation-webui

      2. Follow the one-click installation instructions part way down the page and complete steps 1-3.

      3. When step 3 is done, if there were no errors, the web UI should be running. The command window it opened will show the URL; in my case it shows “http://127.0.0.1:7860”. Enter that into a web browser of your choice.

      4. Now you need to download a model, as you don’t actually have anything to run yet. For simplicity’s sake, I’d start with a small 7B model so you can quickly download it and try it out. Since I don’t know your setup, I’ll recommend the GGUF file format, which works with llama.cpp and can split the model across your CPU and GPU.

      You can try either of these models to start:

      https://huggingface.co/TheBloke/Mistral-7B-v0.1-GGUF/blob/main/mistral-7b-v0.1.Q4_0.gguf (takes 22 GB of system RAM to load)

      https://huggingface.co/TheBloke/vicuna-7B-v1.5-GGUF/blob/main/vicuna-7b-v1.5.Q4_K_M.gguf (takes 19 GB of system RAM to load)

      If you only have 16 GB, you can try one of the other files on those pages (the full list is under the repo’s /main file view) and use a Q3 instead of a Q4 quantization, but that’s going to degrade the quality of the responses.

      5. Once that has finished downloading, go to the folder where you installed the web UI; there will be a folder called “models”. Place the model you downloaded into that folder (if you’d rather fetch the file with a script, see the small sketch after these steps).

      6. In the web UI you’ve launched in your browser, click on the “Model” tab at the top. The top row of that page will indicate that no model is loaded. Click the refresh icon beside it to refresh the list, then select the model you just downloaded in the drop-down menu.

      7. Click the “Load” button.

      8. If everything worked and no errors are thrown (you’d see them in the command prompt window and possibly on the right side of the Model tab), you’re ready to go. Click on the “Chat” tab.

      9. Enter something in the “Send a message” box to begin a conversation with your local AI!
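
      If you’d rather script the download than click through the browser, here’s a minimal sketch using the huggingface_hub Python package. The repo and file names are the ones linked above; the local_dir is my assumption about where the web UI is installed, so adjust it to your setup.

          # pip install huggingface_hub
          from huggingface_hub import hf_hub_download

          # Fetch the Mistral 7B Q4_0 GGUF straight into the web UI's "models" folder.
          path = hf_hub_download(
              repo_id="TheBloke/Mistral-7B-v0.1-GGUF",
              filename="mistral-7b-v0.1.Q4_0.gguf",
              local_dir="text-generation-webui/models",  # assumed install location
          )
          print("Saved to:", path)

      Either way, the file just needs to end up in that “models” folder.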

      Now, that might not be using your hardware efficiently. Back on the Model tab there’s “n-gpu-layers”, which controls how many of the model’s layers are offloaded to the GPU. You can tweak the slider, watch how much memory it says it’s using in the command/terminal window, and try to get that as close to your video card’s RAM as possible.

      Then there’s “threads”: slide that up to the number of physical (non-virtual) cores your CPU has.

      Once you’ve adjusted those, click the “Load” button again, check that there are no errors, and go back to the chat window. I’d only fuss with those settings once you have the basic setup working, so you know any new problems come from the tuning.
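
      For reference, the GGUF loader behind those settings is llama.cpp. If you’re curious what the sliders correspond to outside the UI, here’s a rough sketch using the llama-cpp-python bindings; the model path and numbers are placeholders rather than recommendations, and this runs separately from the web UI.

          # pip install llama-cpp-python
          from llama_cpp import Llama

          llm = Llama(
              model_path="text-generation-webui/models/mistral-7b-v0.1.Q4_0.gguf",  # example path
              n_gpu_layers=20,  # layers offloaded to the GPU; raise until VRAM is nearly full
              n_threads=8,      # physical (non-virtual) CPU cores to use
              n_ctx=2048,       # context window size
          )

          out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
          print(out["choices"][0]["text"])

      In the web UI you don’t need any of that; just move the sliders and click “Load” again, but it’s the same two knobs underneath.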

      Also, if something goes wrong after it’s working, the error should show up in the command prompt window. So if it suddenly hangs or something like that, check that window. It also prints interesting info like tokens per second, so I always keep an eye on it.

      Oh, and TheBloke is a user who converts a huge number of models into various formats for the community. He has a wide variety of GGUF models available on Hugging Face, and when formats change over time, he’s really good at updating them accordingly.

      Good luck!

      • @dep@lemmy.world · 1 year ago

        Wow I didn’t expect such a helpful and thorough response! Thank you kind stranger!

      • @dep@lemmy.world · 1 year ago

        So I got the model working (TheBloke/PsyMedRP-v1-20B-GGUF). How do you jailbreak this thing? A simple request comes back with “As an AI, I cannot engage in explicit or adult content. My purpose is to provide helpful and informative responses while adhering to ethical standards and respecting moral and cultural norms. Blah de blah…” I would have expected this LLM to be wide open?

        • @NotMyOldRedditName@lemmy.world · 1 year ago

          Sweet, congrats! Are you telling it you want to role play first?

          E.g. “I’d like to role play with you. You’re a < > and we’re going to do < >.”

          You’re going to have to play around with it to get it to act the way you’d like. I’ve never had it complain when I preface things with role play. I know we’re here instead of Reddit, but the community around this is much more active over there at /r/LocalLLaMA, and you can find a lot of answers by searching through it for how to get the AI to behave in certain ways. For the time being, there just isn’t a community of its size and engagement anywhere else (70,000 members vs 300).

          You can also create characters (it’s under one of the tabs; I don’t have it open right now), which lets you set a character up once so you don’t need to repeat that preface each time if you always want them to be the same. There’s a website, www.chub.ai, where you can see how some of them are set up; most of that is for a front end called SillyTavern that I haven’t used, but a lot of those descriptions can be carried over. I haven’t really done much with characters, so I can’t give much advice there other than to do some research on it.
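
          If you do experiment with characters, as far as I understand it a character is just a small YAML file in the web UI’s “characters” folder. Here’s a rough Python sketch of creating one; the folder and the field names (name, greeting, context) are my assumption based on the examples that ship with the web UI, so compare against one of those before relying on it.

              # pip install pyyaml
              import yaml

              # A guess at a minimal character file for text-generation-webui.
              # Field names and folder are assumptions; check the bundled example characters.
              character = {
                  "name": "Storyteller",
                  "greeting": "Hi! What would you like to role play today?",
                  "context": "You are a creative role-play partner who stays in character.",
              }

              with open("text-generation-webui/characters/Storyteller.yaml", "w") as f:
                  yaml.safe_dump(character, f, sort_keys=False)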

      • @dep@lemmy.world · 1 year ago

        Stupid newbie question here, but when you go to a HuggingFace LLM and you see a big list like this, what on earth do all these variants mean?

        psymedrp-v1-20b.Q2_K.gguf 8.31 GB

        psymedrp-v1-20b.Q3_K_M.gguf 9.7 GB

        psymedrp-v1-20b.Q3_K_S.gguf 8.66 GB

        etc…

        • @NotMyOldRedditName@lemmy.world · 1 year ago

          Those are different “quantizations”. I’d do some searching on that for a better description, but in summary: the bigger the file, the more resources it needs to run and the slower it will be. In GGUF form the weights go up to 8 bits each (that’s Q8), and it turns out you still get really good results if you drop off some of those bits. The more you drop, the worse the output gets.

          People have generally found that it’s better to have a larger model (more parameters) at a lower quantization than a smaller model at the full 8 bits.

          E.g. 13B Q4 > 7B Q8

          Going below Q4 is generally found to degrade the quality too much, so it’s better to run a 7B Q8 than a 13B Q3, but you can play with that yourself to find what you prefer. I stick to Q4/Q5.

          So you can just look at those file sizes to get a sense of which one keeps the most data. The M (medium) and S (small) suffixes are variations on the same quantization level; I don’t know exactly what they do differently, other than that bigger is generally better.
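
          If you want a rough sanity check on those sizes: a GGUF file is roughly parameters × bits-per-weight / 8 bytes, and the K-quants mix precisions, so their effective bits-per-weight sit a bit above the nominal number. The figures below are approximations, but the arithmetic lines up with that file list.

              # Rough GGUF size estimate: params * effective bits per weight / 8 bytes.
              # The bits-per-weight values are approximate averages for llama.cpp K-quants.
              params = 20e9  # a 20B-parameter model like PsyMedRP-v1-20B

              approx_bpw = {"Q2_K": 3.35, "Q3_K_S": 3.5, "Q3_K_M": 3.9, "Q4_K_M": 4.85, "Q8_0": 8.5}

              for name, bpw in approx_bpw.items():
                  print(f"{name}: ~{params * bpw / 8 / 1e9:.1f} GB")

              # Q2_K comes out around 8.4 GB and Q3_K_M around 9.8 GB,
              # close to the 8.31 GB and 9.7 GB files in that list.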