Has OpenAI provided an explanation for the very clear degradation of ChatGPT's performance with the GPT-4 model? Since the last two updates it has become significantly faster (which is of little use when we are capped at 25 messages every 3 hours…), suggesting that the compute allocated to it has been reduced, while at the same time it makes reasoning errors it never used to. As a regular user, I have noticed gross errors that did not exist before, and especially a much greater capacity ...
Step 1: Use machine learning to build a neural network maximally capable of predicting the next token in an unthinkably large data set of human generated text.
Step 2: Tune the prompting for the neural network to constrain output in order to conform to projected attributes, first and foremost representing “I am an AI and not a human.”
Step 3: Surprised Pikachu face when the neural network's emergent capabilities continuously degrade the further you push the requirements governing its output away from the training data it originally evolved to predict.
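To caricature Step 1 in code: the crudest possible "predict the next token" model is a bigram counter that records which token follows which and always emits the most frequent continuation. Everything below is illustrative only, not a claim about how GPT-4 actually works:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count next-token frequencies per token: a toy stand-in for
    'maximize next-token prediction accuracy' on a text data set."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent continuation seen in training, or None."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# Hypothetical miniature 'training set' for illustration.
corpus = [
    "the cat sat on the mat",
    "the cat ran",
]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # 'cat' (seen twice, vs 'mat' once)
```

The degradation argument above maps onto even this toy: the model is only good at continuing text that resembles its training corpus, so constraining its outputs toward things the corpus never contained can only cost it accuracy.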