The alarmism around AI is just marketing spin.
As @pluralistic@mamot.fr wrote: that’s “mystical nonsense about spontaneous consciousness arising from applied statistics”.
The real problems we face with AI are:
Ghost labor, the erosion of artists’ rights, the costs of automation, the climate impact of data centers, and the human impact of biased, opaque, incompetent, and unfit algorithmic systems.
“ai” as people think of it is marketing spin.
Yeah, we had to rename AI to AGI because marketing fuckers decided to call a (very smart) predictive model “AI”. I had a dumber version of this in my phone decades ago; it should never have been called AI.
AI and AGI have never been the same thing.
It’s so annoying how suddenly everyone’s so convinced that “AI” is some highly specific thing that hasn’t been accomplished yet. Artificial intelligence is an extremely broad subject of computer science and things that fit the description have been around for decades. The journal Artificial Intelligence was first published in 1970, 54 years ago.
We’ve got something that’s passing the frickin’ Turing test now, and all of a sudden the term “artificial intelligence” is too good for that? Bah.
We don’t have anything that passes the Turing test. The test isn’t just “does it trick people casually talking to it into thinking it’s a person”; it’s whether it can deceive a panel of experts deliberately trying to tease out which of the “people” they’re talking to isn’t human.
AFAIK no LLM has passed a rigorous test like that.
GPT-4 ironically fails the Turing test by possessing such wide knowledge across such a variety of topics that it’s obvious it can’t be a human. Basically, it’s too competent to be a human, even despite its flaws.
This is my problem with the conversation. It doesn’t “possess knowledge” like we think of with humans. It repeats stuff it’s seen before. It doesn’t understand the context in which it was encountered. It doesn’t know if it came from a sci-fi book or a scientific journal, and it doesn’t understand the difference. It has no knowledge of the world and how things interact. It only appears knowledgeable because it can basically memorize a lot of things, but it doesn’t understand them.
It’s like cramming for a test. You may pass the test, but it doesn’t mean you actually understand the material. You could only repeat what you read. Knowledge requires actually understanding why the material is what it is.
Yeah, and in no way could it. Just ask how many words are in its reply and it will say, “There are 37 words in this reply.” It’s not even vaguely convincing.
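If anyone wants to reproduce this themselves, here’s a minimal sketch, assuming the OpenAI Python client and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are just illustrative:

```python
# Minimal sketch: ask a chat model to count the words in its own reply.
# Assumes the OpenAI Python client ("pip install openai") and an API key
# in the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",  # illustrative choice; any chat model shows the same effect
    messages=[{
        "role": "user",
        "content": "Answer in a short paragraph, and state exactly how many "
                   "words your entire reply contains.",
    }],
)
reply = response.choices[0].message.content
print(reply)

# Rough whitespace-based word count to compare against the model's own tally.
print("Whitespace-token count of the reply:", len(reply.split()))
```

Whatever number the model states, comparing it against the printed count (or counting by hand) usually tells a different story.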
Yeah, it should just say “Why would you ask me such a stupid question? Count them yourself.”
Nobody is doing these tests, but these days it’s not uncommon for people to mistake things for being AI-generated. Even in professional settings, people are hypervigilant.
Nothing can pass the turing test for me, because I’m pretty sure everyone is a robot including me.
Here’s the summary for the Wikipedia article you mentioned in your comment:
Artificial Intelligence is a scientific journal on artificial intelligence research. It was established in 1970 and is published by Elsevier. The journal is abstracted and indexed in Scopus and Science Citation Index. The 2021 Impact Factor for this journal is 14.05 and the 5-Year Impact Factor is 11.
“AI is whatever hasn’t been done yet.”
Decade number seven.
Whenever some dipshit responds to me with “you’re talking about AGI, this is AI”, my only reply is fuck right off.
“This article was sponsored by Roko’s Basilisk/The paperclip maximiser”
Capitalism is the paperclip maximizer, don’t need AGI for that. AI fears are a tool it uses to achieve maximization.
You clearly don’t understand what the paperclip maximiser is. It doesn’t just produce a lot of paperclips. It doesn’t just kill most people. It kills everyone and turns the entire planet into paperclips. And if we ignore AI safety by saying “nOt My JoB”, we’re just letting it happen.
Yeah, that’s what I’m saying: we’re turning the planet into big numbers in bank accounts. https://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html
“AI” has been diluted to mean nothing but marketing wank. 99.999% of “AI” is single-variable linear regression estimation. Almost all of the rest is multi-variable linear regression.
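For anyone who hasn’t seen it spelled out, single-variable linear regression is just a least-squares line fit. A minimal sketch in numpy, with made-up data points purely for illustration:

```python
# Minimal sketch of single-variable linear regression with numpy;
# the data points below are made up purely for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# Least-squares fit of y ~ slope * x + intercept.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"y ~ {slope:.2f} * x + {intercept:.2f}")
```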
Computers are a scam too. I found out they’re nothing but ones and zeros!
Obviously, trivially, blatantly false, because the AI safety people have been at it since long before there was anything to market. Back then, the bullshit criticism was “AI will never be able to understand language or interpret pictures; what harm could it possibly ever do?”
AI still doesn’t “understand” language or pictures or anything else. It’s little more than statistical analysis based on text and input from humans tagging photos. The fact that we can get some neat output is not indicative that any understanding is going on behind the scenes.
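To give a toy sense of what “statistical analysis based on text” means, here’s a minimal sketch of a bigram model that “predicts” the next word purely from co-occurrence counts; the corpus is made up, and real LLMs are vastly more sophisticated, but the principle of predicting from observed statistics rather than understanding is the point:

```python
# Toy illustration of "statistical analysis of text": a bigram model that
# "predicts" the next word purely from co-occurrence counts, with no notion
# of meaning. The corpus is made up.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    # Pick the most frequently observed follower, if any (ties go to the
    # follower seen first).
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # -> "cat" (the most frequent word after "the" here)
print(predict("cat"))  # -> "sat"
```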
Even today there are a ton of people who simply seem incapable of playing out the thought experiment of an AGI being more competent than humans at virtually everything. They seem to take our current generative AI models as proof that AI can never actually deliver and become what the AI safety people have been worried about for decades.
Yeah and now it’s just people fearmongering “AI is the worst thing to happen to artists, musicians, graphic designers - and it steals everyone’s work! And and it might become Skynet! And and…” people who don’t understand technology always seem to do this. They did it with Crypto, did it with NFTs, and now legitimate technologies have to escape a reputational black hole all because scammers and grifters decided to use it maliciously.
people who don’t understand technology always seem to do this. They did it with Crypto, did it with NFTs
😬
Crypto helped third world countries use a more stable form of currency than their government’s. NFTs are no different from the serial codes on dollar bills except virtual. This is why you people are retarded. You know so little and make moronic comments like these.
Yikes.
Your comment is so entirely fact-free it’s almost impressive. There’s no point refuting anything because it’s clear nothing will change your mind, and I don’t have to anyway because it’s all been dismantled piece by piece already.
https://youtube.com/watch?v=YQ_xWvX1n9g
https://vivaldi.com/blog/why-vivaldi-will-never-create-thinkcoin
I know it’s hard to admit you’ve gambled big on a losing horse, but the solution is not to gamble bigger, it’s to quit gambling.
Edited links for clarity.
“Those morons warned us about these other things (which definitely did have all the problems they warned about; nothing more annoying than a moron that’s repeatedly proven right. What complete idiots.) Anyway, they’re definitely wrong about this thing.”
Do you hear yourself, bruv?
I have mixed feelings about “AI”, and you’ve practically convinced me to be against it. I doubt you could craft a less compelling argument if you tried.
Oh nice another person that knows the buzzwords and generally has opinions but has no deeper knowledge whatsoever. You’re the most fun kind of person to argue against.
I’d let you pick whichever topic you feel most strongly against, but neither you nor I will have the attention span to finish this if we treated it like an actual conversation, so I’ll just go after AI since you have mixed feelings.
Yes, generative AI probably shouldn’t be used on copyrighted works. However, AI is a tool no different from Photoshop. The only difference between someone cutting up pictures of sunsets and melding them together in Photoshop and having an AI do it is the ungodly time it takes. Keep in mind that the Photoshop method is completely covered under the DMCA, so it’s legally protected; but what about morally? Sure, if you’re concerned about using someone’s work, then you’d need an AI that was trained on copyright-free training sets, and I’m sure some already exist. Though I personally don’t have any objections to copyrighted works being manipulated into something different, because that’s the definition of transformative. Even if it starts with something someone owns, it just has to be different enough, and oftentimes it is.
Can’t engage with the actual criticism so you just get all huffy and defensive and try to change the topic.
You gotta get a new script, it’s 2024. This shit is boring.
Good retort! I went into deep detail to address your retardation and you just go “nuh uh”. At least call me a tech bro or something else to disregard my position instead of addressing what I actually fucking said.
Cower, beg, cry: whatever gets you through the next few years of AI development.
Do you often just have self-righteous conversations with yourself based on nothing but your own delusions?
Kinda worrying.
Of course he does, he quotes Marcus Aurelius in his profile. May as well just block him and move on.
ITT: people who will not, under any circumstances, read the linked article.