Another day, another preprint paper shocked that it’s trivial to make a chatbot spew out undesirable and horrible content. [arXiv] How do you break LLM security with “prompt injection”?…
To be fair, I’m skeptical of the idea that humans have minds or perform cognition outside of what’s known to neuroscience. We could stand to be less chauvinist and exceptionalist about humanity. Chatbots suck but that doesn’t mean humans are good.
mayhaps, but it must also be said that people who act like the phrase was “cogito ergo dim sum” don’t exactly aim for a high bar either