Google has plunged the internet into a “spiral of decline”, the co-founder of the company’s artificial intelligence (AI) lab has claimed.
Mustafa Suleyman, the British entrepreneur who co-founded DeepMind, said: “The business model that Google had broke the internet.”
He said search results had become plagued with “clickbait” to keep people “addicted and absorbed on the page as long as possible”.
Information online is “buried at the bottom of a lot of verbiage and guff”, Mr Suleyman argued, so websites can “sell more adverts”, fuelled by Google’s technology.
The part about Google isn’t wrong.
But the second half of the article, where he claims AI chatbots will replace Google search because they give more accurate information, simply isn't true.
I’d say they at least give more immediately useful info. I’ve got to scroll past 5-8 sponsored results, and the next top results are AI-generated garbage anyway.
Even though I think he’s mostly right, the AI-techbro game plan is obvious: position yourself as a better alternative to Google search, burn money by the barrelful to capture the market, then begin the enshittification.
In fact, enshittification has already begun: responses are comparatively expensive to generate, so the more users they onboard, the more they have to scale back the quality of those responses.
ChatGPT is already getting worse at code commenting and programming.
The problem is that enshittification is basically a requirement in a capitalist economy.
I mean, most top searches are AI-generated bullshit nowadays anyway. Adding “reddit” to a search is basically the only decent way to get a proper answer, but those answers aren’t much more reliable than ChatGPT’s. You have to apply the same skepticism and fact-checking regardless.
Google has really gotten horrible over the years.
Most of the results after the first page on Google are usually the same as the usable results, just mirrored on some shady site full of ads and malware.
I already go to ChatGPT more than Google. If you pay for it, the latest version can access the internet, and if it doesn’t know the answer to something it’ll search the web for you. Sometimes I come across a long clickbait page and I just give ChatGPT the link and tell it to pull the information out of it for me.
Do you fact-check the answers?
Whether you need to fact-check depends on what you’re using it for.
I’m a software developer, and if I can’t remember how to do an inner join in SQL, it’s easier to ask ChatGPT to do it for me, and I’ll know whether it’s right because this is my field of expertise.
If I’m asking it how to perform open heart surgery on my cat, then sure I’m probably going to want several second opinions as that is not my area of expertise.
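The inner-join case above is a good example of an answer an expert can verify at a glance. A minimal sketch using Python’s built-in sqlite3 (the tables, columns, and data are made up for illustration):

```python
import sqlite3

# Hypothetical tables for illustration: employees and departments.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, dept_id INTEGER);
    INSERT INTO departments VALUES (1, 'Engineering'), (2, 'Sales');
    INSERT INTO employees VALUES (1, 'Ada', 1), (2, 'Bob', 2), (3, 'Cyd', NULL);
""")

# INNER JOIN keeps only rows with a match on both sides,
# so Cyd (who has no department) is dropped from the result.
rows = con.execute("""
    SELECT e.name, d.name
    FROM employees e
    INNER JOIN departments d ON e.dept_id = d.id
""").fetchall()
print(rows)  # [('Ada', 'Engineering'), ('Bob', 'Sales')]
```

If a chatbot hands you a join like this, a developer can sanity-check it in seconds; that is the commenter’s point about expertise.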
When using a calculator do you use two different calculators to check that the first one isn’t lying?
Also, you made a massive assumption that the stuff OP was using it for was something that warranted fact checking.
I can see why you would use it. Why would I want to search Google for “inner join SQL” when it’s going to give me so many useless links that don’t present the info I need concisely?
Even time-wasting searches have been ruined. Example: searching “top Minecraft Java seeds 1.20” gives me pages littered with ads, or those awful listicles split across pages 1-10 that you must click through.
Many websites are literally unusable at this point, even though I use ad blockers and tools like Consent-O-Matic. There are still pop-up ads, “subscribe to our newsletter” prompts, scam ads, etc., so much so that I’ll just leave the site and forgo learning the new thing I wanted to learn.
The new release of GPT-4 searches Bing, reads the results, summarizes them, and provides sources, so it’s easier than ever to fact-check if you need to.
It’s pretty trivial to fact-check an answer… You should start using these kinds of bots more. Check out perplexity.ai for a free version.
Sources are referenced and linked.
Don’t judge based on the free version of ChatGPT.
Perplexity.ai has been my go to for this reason.
It often suggests bad solutions to a problem, and checking the sources it references shows it regularly misses the gist of them.
The sources it selects are often not the ones I end up using. They’re a starting point, but not the best starting point.
What it is good for is finding content when I don’t know the terminology of the domain. It’s a starting point, ready to lead me astray with exquisitely written content.
Find trustworthy sources and use them.
It’s more of a proof of concept at the moment, but it shows the potential.
That’s what usually gets said about lots of alternative fusion-energy generation methods that later turn out to be incapable of net-positive energy generation.
And this is just one example. Another: tons of “neat concept that shows potential” medical compounds end up dropped at the medical-testing stage because of nasty side effects, or because it turns out their “positive” effects are indistinguishable from the placebo effect.
The point being that you can’t actually extrapolate from “neat concept that shows potential” even to merely “will work”, much less to “will be a great success”.
PS: Equally, one can’t just say it’s not going to be a great success. Being a “neat concept that shows potential” carries pretty low informational content when it comes to predicting the future, and even less so when there are people heavily invested in it financially who have a strong interest in making it look like a “neat concept that shows potential” while hiding any early-stage problems, actively polluting the information space around it.
You’re mixing sci-fi-level cutting-edge basic research (fusion) with commercial products (ChatGPT). They’re two very different types of proof of concept.
And both will likely revolutionize human society. Fusion will simply become commercially viable in 30-50 years. AI has been on the market for years now, and generative models are a few years old. They’re simply getting better, and new products can now be built on top of them.
(BTW, I already use GPT-4 productively every day in my work and it helps me a lot.)
It seems I haven’t explained myself correctly.
This specific tech you seem to be emotionally invested in is no different from the rest in this sense: it still faces the very same kinds of real-world risks and pitfalls. There are possible internal pitfalls inherent to every new technology (i.e. a problem we never knew about, because we never used it with so many people in the real world before, becomes visible with widespread use), and there are possible external pitfalls inherent to how it fits into the complex world we live in (i.e. it turns out the use cases don’t make quite as much economic sense as first thought, or it indirectly generates more problems than it solves).
Such process and fit risks hold for every early-stage “revolutionary” tech (i.e. we never did it before, and now that we do, we discover problems we weren’t at all aware of). Business types might say that revolutionary tech starts with far more “unknown unknowns” than incremental improvements on existing tech, which is why the bean counters rarely put money into the revolutionary and instead mostly fund incremental improvements on proven tech. Sometimes one or more of those “we had no idea this could happen” problems turns out to be insurmountable; sometimes they can be overcome but the result isn’t especially great; sometimes they’re all overcome without any nasty side effects and the thing ends up being world-changing tech. You can’t really tell upfront.
In the case of LLMs, from what I’ve heard, there are two risky problems that might stop them from being “world changing”. One is that LLMs trained on material that includes LLM-generated material actually get worse, so as the internet gets flooded with LLM output passing for human writing, LLMs would degrade further and further. The other is the so-called hallucinations, which are really just the natural side effect of them being language models: all they do is generate compositions of language tokens that pass for human-generated language, with no reasoning involved, so they cannot validate those compositions through inductive or deductive reasoning. LLMs would then only be useful for altering format without touching the information (for example, turning lists of cold hard facts into fluffy long-winded text, or the opposite, summarizing lots of fluffy text into just the facts). That would still have a big impact on certain professions but wouldn’t necessarily be “world changing” (or, even more interestingly, it might over time make people value “fluffy text” less and less, which would be “world changing”, but not in a way that benefits the makers of LLMs).
Unless you want to deny the last four decades of tech history, you can’t logically extrapolate from an early “looks like it might be a success” to “it will be a success”, especially in the era of money-driven overhype we live in.
Give it time; the algos will fuck those results as well.
ChatGPT powers Bing Chat, which can access the internet and find answers for you, no purchase necessary (if you’re not on Edge, you might need to install a browser extension to access it, as they’re still trying to push Edge).
Do you fact-check the answers?
That’s such a strange question. It’s almost like you imply that Google results do not need fact checking.
They do. Everything found online does.
With Google, it depends on what webpage you end up on. Some require more checking; others are more trustworthy.
Generative AI can hallucinate about anything
There are no countries in Africa starting with K.
LLMs aren’t trained to give correct answers, they’re trained to generate human-like text. That’s a significant difference.
They also aren’t valuable for asking direct questions like this.
Their value comes in call-and-response discussions: being able to pair-program and work through a problem, for example. It isn’t about it spitting out a working solution, but about it being able to assess a piece of information differently than you can, which produces a new analysis of the information.
It’s extraordinarily good at finding things you miss in text.
Yeah, there are definitely tasks suited to LLMs. I’ve used them to condense text, write emails, and even for project planning, because they do give decently good ideas if you prompt them right.
Not sure I’d use them for finding information, though, even with the ability to search for it. I’d much rather search myself so I can select the sources, then have the LLM process them.
Agree.
I’ve found it tempting to accept the initial answers I get from GPT-4 (and derivatives) because they’re so well written. I know there are more like me.
With the advent of working LLMs, reference manuals should gain importance too. I check them more often than before because LLMs have forced me to. That could be very positive.
deleted by creator
It’s already happening at my work. Many are using Bing AI instead of Google.
Don’t worry they’ll start monetizing LLMs and injecting ads into them soon enough and we’ll be back to square one
deleted by creator
ChatGPT flat-out hallucinates quite frequently in my experience. It never says “I don’t know” / “that is impossible” / “no one knows” to queries that simply don’t have an answer. Instead, it opts to give a plausible-sounding but completely made-up answer.
A good AI system wouldn’t do this. It would be honest, and give no results when the information simply doesn’t exist. However, that is quite hard to do for LLMs as they are essentially glorified next-word predictors. The cost metric isn’t on accuracy of information, it’s on plausible-sounding conversation.
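The “glorified next-word predictor” point can be sketched with a toy bigram model (the corpus and words here are invented for illustration): the model always emits the statistically most plausible continuation of the text it has seen, with no notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy corpus: the model only ever sees text, never facts.
corpus = ("the famous painter was born in paris . "
          "the famous painter studied in paris . "
          "the famous writer was born in london .").split()

# Count bigrams: which word tends to follow which.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def next_word(w):
    # Pick the most frequent continuation; there is no
    # mechanism anywhere for checking truth, only plausibility.
    return bigrams[w].most_common(1)[0][0]

# Generate a "biography" one word at a time.
out, w = [], "the"
for _ in range(6):
    w = next_word(w)
    out.append(w)
print(" ".join(out))  # famous painter was born in paris
```

A real LLM is vastly more sophisticated, but the objective is the same family: maximize the plausibility of the next token, which is exactly why a confident fabrication scores as well as a fact.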
Ask ChatGPT to “tell me the biography of the famous painter sndrtj” to see how good the bot is at hallucinating an incredibly realistic story that never happened.
Oh, they fixed that! But I see you’re using v4.
I suspect that client-side AI might actually be the kind of thing that filters the crap from search results and actually gets you what you want.
That would only be chat-style AI if it turns out natural-language queries are better at determining what the user is looking for than hand-crafted traditional query strings.
I’m thinking each person could train their AI based on which results they chose from unfiltered queries, with some kind of user-provided suitability feedback to account for clickbait (i.e. somebody selecting a result because it looks good, but it turns out it’s not).
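A minimal sketch of that kind of client-side feedback loop (the class, domain names, and scoring blend are all made up for illustration):

```python
from collections import defaultdict

class PersonalReranker:
    """Re-rank search results using per-user click feedback."""

    def __init__(self):
        self.feedback = defaultdict(float)  # domain -> learned preference

    def record(self, domain, useful):
        # +1 when the user confirms a result helped;
        # -1 when it looked good but turned out to be clickbait.
        self.feedback[domain] += 1.0 if useful else -1.0

    def rerank(self, results):
        # results: list of (domain, engine_score). Blend the engine's
        # score with the user's accumulated preference for the domain.
        return sorted(results,
                      key=lambda r: r[1] + self.feedback[r[0]],
                      reverse=True)

r = PersonalReranker()
r.record("clickbait.example", useful=False)
r.record("docs.example", useful=True)
ranked = r.rerank([("clickbait.example", 0.9), ("docs.example", 0.5)])
print(ranked[0][0])  # docs.example
```

The idea is just that the filtering model lives with the user, so the feedback signal rewards what actually helped rather than what got clicked.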
If you aren’t paying for chatgpt, give a look to perplexity.ai, it is free.
You’ll see that sources are referenced and linked.
Don’t judge based on the free version of ChatGPT.
Edit: Why the hell are you downvoting a legitimate suggestion of a new technology in the technology community? What do you expect to find here? Comments on steam engines?
I dunno. There have been quite a few times where I’m trying to do something on my computer and I could either spend five minutes searching, refining, and digging through the results, or I can ask ChatGPT and have a workable answer in five seconds. And that answer is precisely tailored to my specifics; I don’t have to research how to modify a similar answer to fit my situation.
Obviously it’s dependent on the types of information you need, but for coding, bash scripting, Linux cli, or anything of that nature LLMs have been great and much better than Google searches.
Okay but the problem with that is that LLMs not only don’t have any fidelity at all, they can’t. They are analogous to the language planning centre of your brain, which has to be filtered through your conscious mind to check if it’s talking complete crap.
People don’t realise this and think the bot is giving them real information, but it’s actually just giving them spookily realistic word-salad, which is a big problem.
Of course you can fix this if you add some kind of context engine for them to truly grasp the deeper and wider meaning of your query. The problem with that is that if you do that, you’ve basically created an AGI. That may first of all be extremely difficult and far in the future, and second of all it has ethical implications that go beyond how effective of a search engine it is.
Did you read my last little bit there? I said it depends on the information you’re looking for. I can paste error output from my terminal into Google and try to find an answer, or I can paste it into ChatGPT and, at the very least, be pointed in the right direction almost immediately, or even given the answer right away, versus getting a Stack Overflow link and parsing the responses and comments and following secondary and tertiary links.
I absolutely understand the stochastic-parrot conundrum with LLMs. They have significant drawbacks and are far from perfect, but then neither are Google search results. There’s still a level of skepticism you have to apply.
One of the biggest mistakes people make is the idea that LLMs and web searching are a zero-sum affair. They don’t replace each other; they complement each other. IMO, Google is messing up with its “AI” integration into Google Search. It sets the expectation that it’s an equivalent function.
I don’t need perfect. I need good enough.
Sure but if that becomes the norm then a huge segment of the population will believe the first thing the bot tells them. You might be okay, but we’re talking about an entire society filtering its knowledge through an incredibly effective misinformation engine that will lie rather than say “I don’t know”, because that simple phrase requires a level of self-awareness that eludes a lot of actual people, much less a chatbot.
That’s already a problem. The thing I think about is what will serve me better, Google or chat AI. The risk of bad information exists with both, but an AI-based search engine will be much better at finding context, returning results geared toward my goals, and I suspect less prone to fuckery, because an AI must be trained as a whole.
Except we already know that LLMs lie, and people in general aren’t aware of this. Children are using these. When you as a person have to sift through results, you get a sense of what information is out there, how sparse it is, etc. When a chatbot word-vomits the first thing it can think of to satisfy your query, you get none of that, and perhaps you should be aware of that yourself. You don’t really seem to be; it’s like you think the saved time is more important than context, which, apparently I have to remind you, the bot doesn’t know.
When you say:
It makes me think that you really don’t understand how these bots work, and that’s the real danger.
We’re talking in this thread about this wider systemic issue, not just what suits you personally regardless of how much it gaslights you, but if that’s all you care about then you do you I guess ¯\_(ツ)_/¯
“Lie” is a weird way to describe it. They give you an answer based on probabilities. When they’re off base, it’s called hallucinating. It’s not lying; it’s just lacking the data to give an accurate and correct answer, which will get better with more training and data. Everything else we have so far gets worse; Google isn’t what it was 15 years ago.
I use ChatGPT every day to find answers over Google. It’s better in almost every single way to get information from, and I can only imagine what it’ll be capable of once it can interface with crawlers.
The language you’re using to speak on this issue makes it seem like there’s a personal vendetta against LLMs. Why people get so mad at a new tool is always fascinating.