Another day, another preprint paper shocked that it’s trivial to make a chatbot spew out undesirable and horrible content. [arXiv] How do you break LLM security with “prompt injection”?…
lmao, really, all are equal on awful. A ban in three replies for AI boosterism, but not for weird harassment or murder-suicide encouragement, which happened to that user after a muchhh longer time elsewhere.