- cross-posted to:
- hackernews@derp.foo
- chatgpt@lemmy.ml
ChatGPT is losing some of its hype, as traffic falls for the third month in a row::August marked the third month in a row that the number of monthly visits to ChatGPT’s website worldwide was down, per data from Similarweb.
Hasn’t the service also gotten worse? When it first came out, you kept hearing how it could pass the bar exam and medical license test. And now all you hear is that it can’t do basic high school homework without wrong answers. Maybe it was hype in the beginning and it never could do those things.
It is so neutered now that I rarely find it is worth the time to use it. Now it just gives non-answers that are rarely helpful or accurate.
Edit: To give a bit more context, I still use it several times a day just to be sure it is still a disappointment.
It was always able to do some genuinely amazing things, but it was always limited when you took it beyond its wheelhouse. It helps to think of it not as an “AI” as people keep saying, but as a “text completer” with a huge amount of power within that domain.
Or, another way to think of it is as a super-powerful search engine. If the answers and knowledge you’re asking of it were fed into it as input data at some point in its training, it’ll probably be able to find it and reformat it back to you with a scary amount of smoothness and precision. If you’re asking it to figure out something new, it may be able to fake it in some short-term fashion based on what it’s seen, but not with any genuine understanding behind it. That’s just not what it does.

I actually have a little private theory that if it were given something like the bar exam in scale and complexity, but one that was a genuinely novel invention that hadn’t been extensively discussed and represented in its input corpus, it would fail pretty badly. A lot of what makes humans capable is that we can adapt to new domains – we can teach ourselves to play chess, or do math, or fly airplanes, or play Celeste. GPT is hugely impressive, but it’s still only one domain.
I actually don’t believe that it’s gotten substantively less capable. I think there are little ticks up and down in its capability sometimes in particular areas, and people seize on those to conclude that it’s now becoming dumber. But in my experience, the raw API was always quite capable (more so than the somewhat nerfed chat interface), and it was always super-capable with some tasks and not at all capable with others. I think journalists, after having studied the issue in their professional capacity for the better part of a year, are just now figuring that out and reporting on it as if it’s a new thing.
From what I’ve seen, here’s what happened. GPT-4 came out, and it can pass the bar exam and medical boards. Then more recently some studies came out. Some of them were from before GPT-4 was released and only just got published or picked up by the press; others were poorly done or used GPT-3 (probably because GPT-4 is expensive), and the press doesn’t pick up on the difference. GPT-4 is really good and has lots of uses. GPT-3 has many uses as well but is definitely way more prone to hallucinating.