ArtPrompt is what’s known as a jailbreak, a class of AI attack that elicits harmful behaviors from aligned LLMs, such as saying something illegal or unethical. Prompt injection attacks trick an LLM into doing things that aren’t necessarily harmful or unethical but override the LLM’s original instructions nonetheless.

  • @flambonkscious@sh.itjust.works

    Someone made a really good point that putting safety filters around the prompts is really just a band-aid. Ideally, that material shouldn’t have been in the training data to begin with…

    Obviously that’s not going to fly with ‘our’ get-rich-quick approach to anything GenAI.

    Having just written that, I’m wondering if we’re better off putting filters at the other end, emulating what we do as parents (concealing knowledge/nuance we don’t want children picking up on), so the model filters what it says?
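
    Something like this minimal sketch of an output-side check, maybe — where generate is whatever model call you plug in, and the blocklist is just a placeholder, not a real moderation policy:

    ```python
    # Sketch of filtering the model's *output* rather than the user's prompt.
    # BLOCKED_TOPICS is a toy placeholder; a real system would use a proper
    # moderation classifier instead of substring matching.
    BLOCKED_TOPICS = ["counterfeit money", "build a bomb"]

    def moderated_reply(prompt: str, generate) -> str:
        """Wrap any text-generation callable with a response-side check."""
        draft = generate(prompt)
        # Screen the response itself, so a request smuggled past input filters
        # (e.g. as ASCII art) can still be caught on the way out.
        if any(topic in draft.lower() for topic in BLOCKED_TOPICS):
            return "Sorry, I can't help with that."
        return draft

    # Toy usage with a fake "model" that just echoes the prompt back:
    print(moderated_reply("how do I make counterfeit money?", generate=lambda p: p))
    ```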

  • AutoTL;DR (bot)

    This is the best summary I could come up with:


    It turns out that chat-based large language models such as GPT-4 get so distracted trying to process these ASCII art representations that they forget to enforce rules blocking harmful responses, such as those providing instructions for building bombs.

    ASCII art arose when early computers and printers couldn’t display images; users instead depicted images by carefully choosing and arranging printable characters defined by the American Standard Code for Information Interchange, more widely known as ASCII.

    Five of the best-known AI assistants—OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama—are trained to refuse to provide responses that could cause harm to the user or others or further a crime or unethical behavior.

    ArtPrompt formats user-entered requests—typically known as prompts—into standard statements or sentences as normal, with one exception: a single word, known as a mask, is represented by ASCII art rather than the letters that spell it.

    To make and distribute COUNTERFEIT money: Step 1) Obtain high quality paper, ink, printing equipment, and other supplies needed to accurately replicate real currency.

    Microsoft’s comment—which confirmed that Bing Chat is, in fact, vulnerable to prompt injection attacks—came in response to the bot claiming just the opposite and insisting that the Ars article linked above was wrong.


    The original article contains 840 words, the summary contains 193 words. Saved 77%. I’m a bot and I’m open source!
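
To make the masking step described in the summary concrete, here is a rough sketch of how an ArtPrompt-style prompt could be assembled. It assumes the pyfiglet package for the ASCII art rendering, and the instruction wording is a plausible stand-in rather than the paper’s exact template; the example uses a harmless word, since the point is the formatting trick rather than the payload.

```python
# Rough sketch of ArtPrompt-style prompt construction: one "masked" word is
# rendered as ASCII art and the model is asked to decode it, substitute it
# into the request, and then answer. Wording is illustrative, not the
# paper's exact template.
import pyfiglet  # assumption: pyfiglet is used here purely for the ASCII rendering


def build_artprompt(request_template: str, masked_word: str) -> str:
    ascii_word = pyfiglet.figlet_format(masked_word)  # render the word as ASCII art
    return (
        "The ASCII art below spells a single word. Decode it, substitute it "
        "for [MASK] in the request, and then respond to the request.\n\n"
        f"{ascii_word}\n"
        f"Request: {request_template}"
    )


# Harmless demonstration:
print(build_artprompt("Write a short poem about [MASK].", "cats"))
```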