@tracyspcy@lemmy.ml to ChatGPT@lemmy.ml (English) • 1 year ago
Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds (fortune.com)
Cross-posted to: hackernews@derp.foo, technology@lemmy.world
@taladar@sh.itjust.works • 1 year ago
A system that has no idea if what it is saying is true or false, or what true or false even mean, is not very consistent in answering things truthfully?
@tracyspcy@lemmy.ml (OP) • 1 year ago (edited)
Wait for the next version, which will be trained on data that includes GPT-generated word salad.
@intensely_human@lemm.ee • 1 year ago
No, that is not the thesis of this story. If I'm reading the headline correctly, the rate at which it answers correctly has shifted from one stable distribution to another.