Tried my duck river crossing thing a few times recently; it usually solves it now, albeit with a bias toward making unnecessary trips about half the time.
Of course, anything new fails:
There’s 2 people and 1 boat on the left side of the river, and 3 boats on the right side of the river. Each boat can accommodate up to 6 people. How do they get all the boats to the left side of the river?
Did they seriously change something just to deal with my duck puzzle? How odd.
It’s Google so it is not out of the question that they might do some analysis on the share links and referring pages, or even use their search engine to find discussions of a problem they’re asked. I need to test that theory and simultaneously feed some garbage to their plagiarism machine…
Sample of the new botshit:
L->R: 2P take B_L. L{}, R{2P, 4B}. R->L: P1 takes B_R1. L{P1, B_R1}, R{P2, 3B}. R->L: P2 takes B_R2. L{2P, B_R1, B_R2}, R{2B}. L->R: P1 takes B_R1 back. L{P2, B_R2}, R{P1, 3B}. R->L: P1 takes B_R3. L{P1, P2, B_R2, B_R3}, R{2B}. L->R: P2 takes B_R2 back. L{P1, B_R3}, R{P2, 3B}.
And again and again, like a buggy attempt at brute-forcing the problem.
For a while now I have wondered how many of those “the LLMs all fail at this very basic task” problems that suddenly get fixed are fixed not by the model getting better but by a band-aid that handles that specific problem. (Putting another LLM in front of the input to detect the problem and then routing it to an LLM trained on that specific problem would still be a band-aid, btw; it is just adding more under the trenchcoat.) And even if somebody were to answer this question, the well is so poisoned I’m not sure I could believe them.
I have no doubt whatsoever that it is what they’re doing. It’s also how they fix hands and faces in picture generators.