- cross-posted to:
- artificial_intel@lemmy.ml
Latest generation of products not becoming part of people’s “routine internet use”, researchers say.
I’m in a country where not only do I not have access to it, but I also can’t really afford a subscription. Are GPT-4o or Claude Opus as smart as they say? Can they come up with creative solutions and solve difficult problems that aren’t in their training data?
They’re not smart, but they’re helpful for a lot of things.
They’re a useful tool (unlike me, a useless tool) but man are they being oversold :(
They’re bullshit generators, essentially - it doesn’t matter to them whether they generate something that’s ‘true’ or not, as long as it’s plausible. It depends what you intend to use them for - if you want a throw-away image for a PowerPoint slide that will only be looked at once for a few seconds, they’re ideal. They generate shit code and boring, pointless stories, so I couldn’t recommend them for that.
If you’re a D&D GM that’s in need of quite a lot of ‘disposable’ material, they’re alright. Image of a bad guy that you can then work into the story? Great. Names for every single Gnomish villager? Great. Creating intricate and interesting lore that brings your world alive? No, they are not actually intelligent and cannot do that - that’s the part that you provide.
At the moment, huge amounts of venture capital money are making these things much cheaper than their true cost. I can only imagine the price is going to go up a lot when that runs out. You might not be able to afford the subscription now, but you’ll be in good company soon.
To address your last point, there have been continual improvements in making LLMs more efficient as well. While I definitely couldn’t run GPT-4 on my local machine, I can run something decently close to 3.5-turbo locally, which would’ve been unheard of only a year or two ago.
And from the sounds of it, GPT-4o is another big step in efficiency (though it’s hard to say for certain).
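If you want to try a local model yourself, here’s a minimal sketch using llama-cpp-python - the model file, prompt, and settings below are just placeholder assumptions on my part, not a specific recommendation:

```python
# Minimal local-inference sketch with llama-cpp-python.
# Assumes you've already downloaded a quantized GGUF model file;
# the path and parameters here are hypothetical placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=2048,    # context window size
    n_threads=8,   # CPU threads to use for inference
)

result = llm(
    "Q: Why can quantized models run on consumer hardware?\nA:",
    max_tokens=200,
    stop=["Q:"],   # stop before the model starts inventing a new question
)

print(result["choices"][0]["text"].strip())
```

A 4-bit quantized 7B model fits in a few gigabytes of RAM, which is why this is feasible on an ordinary machine at all.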
Nope
Bing Chat works in any browser, and I believe it’s based on GPT-4.
What would we use it for? There aren’t too many use cases imho.
Google is SEO-optimized shit nowadays, so I use GPT as a search engine replacement. Coding and IT-related questions also work well, and I’ve cut back some of our IT consultant hours at work by asking GPT things like “would feature X work for use case Y?”
I still have more than 80% of my credits left on two different imaging AI products. Writing prompts and crossing your fingers hoping you get what you want got stale pretty quick. ChatGPT never appealed to me. It seemed less about the output than about what it could gain from user input. The whole thing seems adversarial to the end user, with an “if you don’t like it, you’re not doing it right” kind of attitude. I think the push to get us to adopt it in everyday use is to lower the responsibility and liability of those pushing it. It’s a bubble like all tech fads, but it will leave some sort of scar on our day-to-day.
There’s a great episode of Silicon Valley where a shitty billionaire uses a Siri-like voice bot to control his son. That way he doesn’t have to be the bad guy when he wants his son to do something. This is exactly the power of AI our corporate masters crave.
(Scarlett Johansson voice) Sorry, you were denied an extension on your emergency medical loan. How would you like to pay? Don’t forget to take our survey at the end of the transaction! -Medical Credit Authority Assistant powered by Medi-Ai
These are just the current tech fad that got overhyped, just like blockchain did.
I find them useful for getting over that initial hurdle of writing documents. I can just write down bullet points, tell ChatGPT to turn it into text, and then revise almost all of what it writes because I don’t like the style and it misinterpreted a bullet point or two. I find that a lot easier than writing from scratch.
It’s the classic example of people following the trend and repeating opinions they’ve heard while never actually trying to use a thing effectively before deciding to hate it.
You’re saying people don’t use ChatGPT in their daily routine because they didn’t give it a fair shot?
Not because it’s just objectively lacking in utility for the things it’s marketed for?
Because people never tried to use the tool effectively. If you spend an hour learning how to prompt effectively, your chances of getting higher-quality generations go up significantly. Bad outputs still happen, obviously, but I can confidently say ~80% of the outputs I get are serviceable enough for my use cases. The remaining 20% need some manual tweaks and reprompting, but I can usually get them there. I find it works best for things where you already know how to ask an effective question. Just asking a basic question without giving enough information will get you lower-quality responses than a more detailed prompt. It’s kind of the opposite of what we’ve learned to do with search engines: more information is actually better than less in this case.
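To make that concrete, here’s a rough sketch of the difference using the OpenAI Python client - the model name, prompts, and log format are just illustrative assumptions, not the “right” way to do it:

```python
# Rough sketch: vague prompt vs. detailed prompt via the OpenAI Python client.
# Assumes OPENAI_API_KEY is set in the environment; model and prompts are examples.
from openai import OpenAI

client = OpenAI()

# A vague prompt - tends to produce a generic, lower-quality answer.
vague = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a function to parse logs."}],
)

# A detailed prompt - states language, input format, edge cases, and desired output.
detailed = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Write a Python function that parses nginx access-log lines into "
            "dicts with the keys ip, timestamp, method, path, status, and bytes. "
            "Skip malformed lines instead of raising, and include a short docstring "
            "plus one usage example."
        ),
    }],
)

print(detailed.choices[0].message.content)
```

The detailed version does the “effective question” work up front, so there’s far less reprompting afterwards.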
This sounds like cope.
Plenty of people have tried; I have tried repeatedly, in various different ways.
Maybe the people aren’t wrong, and chat bots are in fact just not all that useful for most people.
How is me describing my personal experience of how it’s been helpful “cope”? It sounds more like cope to fail at using a technology correctly and then blame the technology for it.
“The people are just using the technology wrong. If they used it how I do and for what I do, surely they’d see how good it is for them.”
That’s cope. Plenty of worse solutions to invented problems are adopted much faster and more prolifically.