I experimented with rules like: “Summarize everything of our discussion into one text you can use as memory below your answer.” and “Summarize and remove unnecessary info from this text; if contradictions occur, act curious to resolve them.”… simply to mimic short-term memory.
It kind of worked better for problem solving, but it ate tokens like crazy and the answers took longer and longer. The current GPT-4 models seem to do something similar in the background.
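Roughly what I mean, as a sketch (call_llm() here is just a hypothetical stand-in for whatever chat API you use, so treat this as the idea rather than a finished implementation):

    # Rolling-summary "short-term memory": every answer ends with a summary,
    # and that summary is fed back in as context for the next question.

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in: plug in whatever chat completion API you use.
        raise NotImplementedError

    MEMORY_TAG = "MEMORY:"
    memory = ""  # the rolling summary that mimics short-term memory

    def ask(question: str) -> str:
        global memory
        prompt = (
            f"Memory of our discussion so far:\n{memory or '(empty)'}\n\n"
            f"Question: {question}\n\n"
            "Answer the question. Then, on a new line starting with "
            f"'{MEMORY_TAG}', summarize everything in our discussion into one "
            "text you can use as memory. Remove unnecessary info; if "
            "contradictions occur, act curious and try to resolve them."
        )
        reply = call_llm(prompt)
        answer, _, new_memory = reply.partition(MEMORY_TAG)
        if new_memory:
            memory = new_memory.strip()  # carry the summary into the next turn
        return answer.strip()

The token cost shows up right there: the memory string gets prepended to every prompt, so it keeps growing as the conversation goes on.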
I think that’s still different from what I’m thinking of with interim steps, though.
…but as I think about how to explain it, I realize I’m about to blather about things I don’t understand, or at least haven’t had time to think about! So I’d better leave it there!
I would really like to get into LLM and AI development, but the math… whoosh, right over my head.