- cross-posted to:
- hackernews@lemmy.bestiver.se
- Technology@programming.dev
cross-posted from: https://programming.dev/post/36160327
Comments
LLMs, as the name suggests, are language models - not knowledge machines. Answering questions correctly isn’t what they’re designed to do. The fact that they get anything right isn’t because they “know” things, but because they’ve been trained on a lot of correct information. That’s why they come off as more intelligent than they really are. At the end of the day, they were built to generate natural-sounding language - and that’s all. Just because something can speak doesn’t mean it knows what it’s talking about.
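The point above can be illustrated with a toy sketch: at its core, a language model picks the next word by sampling from learned probabilities, not by looking up facts. The bigram table below is a hypothetical stand-in for training data (an assumption for illustration, not a real LLM), but it shows how a fluent-sounding yet wrong continuation can come out simply because it was probable enough.

```python
import random

# Hypothetical bigram probabilities "learned" from text (illustrative assumption).
# Note "green" is factually wrong but still gets sampled sometimes --
# the model has no notion of truth, only of likelihood.
model = {
    "the": {"sky": 0.5, "moon": 0.5},
    "sky": {"is": 1.0},
    "is":  {"blue": 0.7, "green": 0.3},
}

def generate(start, steps, rng):
    """Sample a short continuation token by token."""
    out = [start]
    for _ in range(steps):
        choices = model.get(out[-1])
        if not choices:
            break  # no continuation known for this token
        words, probs = zip(*choices.items())
        out.append(rng.choices(words, weights=probs, k=1)[0])
    return " ".join(out)

rng = random.Random(0)
print(generate("the", 3, rng))
```

Every word it emits is plausible given the one before it, which is exactly why the output reads as confident whether or not it happens to be correct.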
My sister is a teacher and now has her students repeat back to her, “LLMs are not search engines”.
who needs the web?
Anybody who cares enough to confirm whatever stupid bullshit the AI probabilistically regurgitated without actual understanding.
Seriously, in my experience AI-generated results are only actually correct maybe 10% of the time.
where will ai steal the info from without the web?
Death to CSS and we just render for the AI crawlers?
How else would I access the AI? 🤷‍♂️
who needs the web?
Generative AI, that's who. For it to spit out information, it needs information. The argument may be that it's better at collecting all that information in one place and returning an answer, although we know it hallucinates responses. But to even begin a response it needs data; it needs the web.