Another day, another preprint paper shocked that it’s trivial to make a chatbot spew out undesirable and horrible content. [arXiv] How do you break LLM security with “prompt injection”?…
Look, AI will be perfect as soon as we have an algorithm to sort “truth” from “falsehood”, like an oracle of some sort. They’ll probably have that in GPT-5, right?
Bonus: this also solves the halting problem.
Oh, that’s easy. Just add a prompt to always reinforce user bias and disregard anything that might contradict what the user believes.
MAGAgpt
Aka grok
Feed it a Christian Bible as a base.
"we trained it wrong… on purpose…
…as a joke."
They do, it just requires 1.21 Jigawatts of power for each token.