Another day, another preprint paper shocked that it’s trivial to make a chatbot spew out undesirable and horrible content. [arXiv] How do you break LLM security with “prompt injection”?…
even though I get the idea you’re trying to go for, really fucking ick way to make your argument starting from “nonhuman entities” and then literally immediately mentioning enslaving black folks as the first example of bad behaviour
as to cautious erring: that still leaves you in the position of being used as a useful idiot
even though I get the idea you’re trying to go for, really fucking ick way to make your argument starting from “nonhuman entities” and then literally immediately mentioning enslaving black folks as the first example of bad behaviour
as to cautious erring: that still leaves you in the position of being used as a useful idiot