ArtPrompt is what’s known as a jailbreak, a class of AI attack that elicits harmful behaviors from aligned LLMs, such as saying something illegal or unethical. Prompt injection attacks trick an LLM into doing things that aren’t necessarily harmful or unethical but override the LLM’s original instructions nonetheless.

  • @flambonkscious@sh.itjust.works
    link
    fedilink
    English
    88 months ago

    Someone made a really good point, that putting safety filters around the prompts is really just a band aid. Ideally, it needs to have not been in the training data to begin with…

    Obviously that’s not going to fly with ‘our’ get rich quick approach to anything GenAI.

    Having just written that, I’m wondering if we’re better off having filters at the other end, emulating what we do as parents (concealing knowledge/nuance I don’t want children picking up on), so it filters what it says?