Has OpenAI provided an explanation for the very clear degradation of ChatGPT’s performance with the GPT-4 model? Since the last two updates, it has become significantly faster (which is fundamentally useless when we are capped at 25 messages every 3 hours…), suggesting that computational power has been reduced, while at the same time it is making unprecedented reasoning errors. As a regular user, I have noticed the emergence of gross errors that did not exist before, and especially a much greater capacity ...
Am I the only one who hasn’t seen this at all? I regularly use ChatGPT for fairly challenging tasks, and it still does what it’s supposed to do. I think it’s pretty telling that when people ask the guy, “Can you post some examples of what you’re talking about?”, his first reaction is that he doesn’t save chats, and then when specific examples finally start getting thrown around, they’re all one-off things that look to me to be within the normal variability of the system.
I’m not saying there hasn’t been a real degradation that people have been noticing, just that I haven’t experienced one and the people claiming they have seem a little non-quantitative in their reasoning.