Another day, another preprint paper shocked that it’s trivial to make a chatbot spew out undesirable and horrible content. [arXiv] How do you break LLM security with “prompt injection”?…
Well, I don’t think that last part was useful, but I do think the previous part was useful as a way to focus conversation. Many people don’t read the article, and I thought that was the most relevant section.
Good grief. At least say “I thought this part was particularly interesting” or “This is the crucial bit” or something in that vein. Otherwise, you’re just being odd and then blaming other people for reacting to your being odd.
Well, I don’t think that last part was useful, but I do think the previous part was useful as a way to focus conversation. Many people don’t read the article, and I thought that was the most relevant section.
Good grief. At least say “I thought this part was particularly interesting” or “This is the crucial bit” or something in that vein. Otherwise, you’re just being odd and then blaming other people for reacting to your being odd.
Actually I’m finding this quite useful. Do you mind posting more of the article? I can’t open links on my phone for some reason
Actually this comm seems really messed up, so I’mma just block it and move on. Sorry for ruffling your feathers, guv.