Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet,
atm we only have LLMs (Large Language Models),
which do not think on their own,
but pass Turing tests
(fool humans into thinking that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes,
who already invested in LLM stocks,
and now are looking for a profit.

  • PonyOfWar@pawb.social

    The word “AI” has been used for way longer than the current LLM trend, even for fairly trivial things like enemy AI in video games. How would you even define a computer “thinking on its own”?

      • Lath@kbin.social

        But will they be depressed or will they just simulate it because they’re too lazy to work?

          • meyotch@slrpnk.net

            It’s only tangentially related to the topic, since it involves brain enhancements, not ‘AI’. However, you may enjoy the short story “Reasons to be cheerful” by Greg Egan.

        • JackFrostNCola@lemmy.world

          If they are too lazy to work, that would imply they have motivation and choice beyond "doing what my programming tells me to do, i.e. input, process, output". And if they have the choice not to do work because they don't 'feel' like doing it (and it's not a programmed/coded option given to them to use), then would they not be thinking for themselves?

      • PonyOfWar@pawb.social

        Not sure about that. An LLM could show symptoms of depression by mimicking depressed texts it was fed. A computer with a true consciousness might never get depression, because it has none of the hormones influencing our brains.

        • Deceptichum@kbin.social

          Me: Pretend you have depression

          LLM: I’m here to help with any questions or support you might need. If you’re feeling down or facing challenges, feel free to share what’s on your mind. Remember, I’m here to provide information and assistance. If you’re dealing with depression, it’s important to seek support from qualified professionals like therapists or counselors. They can offer personalized guidance and support tailored to your needs.

          • PonyOfWar@pawb.social

            Give it the right dataset and you could easily create a depressed-sounding LLM to rival Marvin the Paranoid Android.

        • Feathercrown@lemmy.world

          Hormones aren’t depression, and for that matter they aren’t emotions either. They just cause them in humans. An analogous system would be fairly trivial to implement in an AI.

          • PonyOfWar@pawb.social

            That's exactly my point though: as OP stated, we could detect if an AI was truly intelligent if it developed depression. Without hormones or something similar, there's no reason to believe it ever would develop those on its own. The fact that you could artificially give it depression is beside the point.

            • Feathercrown@lemmy.world

              I don't think we have the same point here at all. First off, I don't think depression is a good measure of intelligence. But mostly, my point is that it doesn't make it less real when hormones aren't involved. Hormones are simply the mediator that causes that internal experience in humans. If a true AI had an internal experience, there's no reason to believe that it would require hormones to be depressed. Do text-to-speech systems require a mouth and vocal cords to speak? Do robots need muscle fibers to walk? Do LLMs need neurons to form complete sentences? Do cameras need eyes to see? No, because it doesn't matter what something is made of. Intelligence and emotions are made of signals. What those signals physically are is irrelevant.

              As for giving it feelings vs it developing them on its own-- you didn’t develop the ability to feel either. That was the job of evolution, or in the case of AI, it could be intentionally designed. It could also be evolved given the right conditions.

              • PonyOfWar@pawb.social

                First off, I don’t think depression is a good measure of intelligence.

                Exactly. Which is why we shouldn't judge an AI's intelligence based on whether it can develop depression. Sure, it's feasible it could develop it through some other mechanism. But there's no reason to assume it would, in the absence of the factors that cause depression in humans.

          • Markimus@lemmy.world

            Sorry, to be clear I meant it can mimic the conversational symptoms of depression as if it actually had depression; there’s no understanding there though.

            You can’t use that as a metric because you wouldn’t be able to tell the difference between real depression and trained depression.

    • Ratulf@feddit.de

      The best thing is that enemy "AI" needs to be made worse right after you create it. At first it'll headshot everything across the map in milliseconds. The art is making it dumber.
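      The point above can be sketched in a few lines: the bot first computes a perfect firing solution, and the designer then dials in error to make it beatable. This is only a hypothetical illustration (the function and parameters are invented, not from any actual game):

```python
import math
import random

def aim_at(enemy_pos, player_pos, skill=0.5, rng=None):
    """Return an aim angle toward the player, degraded by (1 - skill).

    A perfect bot returns the exact angle instantly; lowering `skill`
    adds angular error so the enemy misses like a human would."""
    rng = rng or random.Random()
    dx = player_pos[0] - enemy_pos[0]
    dy = player_pos[1] - enemy_pos[1]
    perfect_angle = math.atan2(dy, dx)  # the instant-headshot solution
    max_error = (1.0 - skill) * 0.35    # radians of allowed sloppiness
    return perfect_angle + rng.uniform(-max_error, max_error)

# skill=1.0 is the "headshots across the map" bot: zero aim error.
perfect = aim_at((0, 0), (10, 0), skill=1.0)
# skill=0.5 is off by up to ~0.175 rad: that tuning is "the art".
human_like = aim_at((0, 0), (10, 0), skill=0.5)
```

      The same idea also covers reaction delay and deliberately bad target prioritization; aim error is just the simplest knob to show.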

    • Meowoem@sh.itjust.works

      It’s a computer science term that’s been used for this field of study for decades, it’s like saying calling a tomato a fruit is a marketing decision.

      Yes, it's somewhat common outside computer science to expect an artificial intelligence to be sentient, because that's how movies use the term. John McCarthy's proposal, which coined the term in 1956, is available online if you want to read it.

      • ToRA@lemmy.world

        “Quantum” is a scientific term, yet it’s used as a gimmicky marketing term.

        • Meowoem@sh.itjust.works

          Yes, perfect example. People use "quantum" as the buzzword in every film, so people think of it as a silly thing, but when CERN talks about quantum communication or circuit quantum electrodynamics, it'd be silly to try and tell them they're wrong.

    • UnityDevice@startrek.website

      They didn’t just start calling it AI recently. It’s literally the academic term that has been used for almost 70 years.

      The term "AI" could be attributed to John McCarthy of MIT (Massachusetts Institute of Technology), and Marvin Minsky (Carnegie-Mellon University) defines it as "the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as: perceptual learning, memory organization and critical reasoning". The summer 1956 conference at Dartmouth College (funded by the Rockefeller Institute) is considered the founding event of the discipline.

      • 9bananas@lemmy.world

        perceptual learning, memory organization and critical reasoning

        i mean…by that definition nothing currently in existence deserves to be called “AI”.

        none of the current systems do anything remotely approaching “perceptual learning, memory organization, and critical reasoning”.

        they all require pre-processed inputs and/or external inputs for training/learning (so the opposite of perceptual), none of them really do memory organization, and none are capable of critical reasoning.

        so OPs original question remains:

        why is it called “AI”, when it plainly is not?

        (my bet is on the faceless suits deciding it makes them money to call everything “AI”, even though it’s a straight up lie)

        • UnityDevice@startrek.website

          so OPs original question remains: why is it called “AI”, when it plainly is not?

          Because a bunch of professors defined it like that 70 years ago, before the AI winter set in. Why is that so hard to grasp? Not everything is a conspiracy.

          I had a class at uni called AI, and no one thought we were gonna be learning how to make thinking machines. In fact, compared to most of the stuff we did learn to make then, modern AI looks godlike.

          Honestly you all sound like the people that snidely complain how it’s called “global warming” when it’s freezing outside.

          • 9bananas@lemmy.world

            just because the marketing idiots keep calling it AI, doesn’t mean it IS AI.

            words have meaning; i hope we agree on that.

            what’s around nowadays cannot be called AI, because it’s not intelligence by any definition.

            imagine if you were looking to buy a wheel, and the salesperson sold you a square piece of wood and said:

            “this is an artificial wheel! it works exactly like a real wheel! this is the future of wheels! if you spin it in the air it can go much faster!”

            would you go:

            “oh, wow, i guess i need to reconsider what a wheel is, because that’s what the salesperson said is the future!”

            or would you go:

            “that’s idiotic. this obviously isn’t a wheel and this guy’s a scammer.”

            if you need to redefine what intelligence is in order to sell a fancy statistical model, then you haven’t invented intelligence, you’re just lying to people. that’s all it is.

            the current mess of calling every fancy spreadsheet an “AI” is purely idiots in fancy suits buying shit they don’t understand from other fancy suits exploiting that ignorance.

            there is no conspiracy here, because it doesn’t require a conspiracy; only idiocy.

            p.s.: you’re not the only one here with university credentials…i don’t really want to bring those up, because it feels like devolving into a dick measuring contest. let’s just say I’ve done programming on industrial ML systems during my bachelor’s, and leave it at that.

            • UnityDevice@startrek.website

              These arguments are so overly tired and so cyclic that AI researchers coined a name for them decades ago - the AI effect. Or succinctly just: “AI is whatever hasn’t been done yet.”

              • 9bananas@lemmy.world

                i looked it over and … holy mother of strawman.

                that’s so NOT related to what I’ve been saying at all.

                i never said anything about the advances in AI, or how it’s not really AI because it’s just a computer program, or anything of the sort.

                my entire argument is that the definition you are using for intelligence, artificial or otherwise, is wrong.

                my argument isn’t even related to algorithms, programs, or machines.

                what these tools do is not intelligence: it’s mimicry.

                that’s the correct word for what these systems are capable of. mimicry.

                intelligence has properties that are simply not exhibited by these systems, THAT’S why it’s not AI.

                call it what it is, not what it could become, might become, will become. because that’s what the wiki article you linked bases its arguments on: future development, instead of current achievement, which is an incredibly shitty argument.

                the wiki talks about people using shifting goal posts in order to “dismiss the advances in AI development”, but that’s not what this is. i haven’t changed what intelligence means; you did! you moved the goal posts!

                I’m not denying progress, I’m denying the claim that the goal has been reached!

                that’s an entirely different argument!

                all of the current systems, ML, LLM, DNN, etc., exhibit a massive advancement in computational statistics, and possibly, eventually, in AI.

                calling what we have currently AI is wrong, by definition; it’s like saying a single neuron is a brain, or that a drop of water is an ocean!

                just because two things share some characteristics, some traits, or because one is a subset of the other, doesn’t mean that they are the exact same thing! that’s ridiculous!

                the definition of AI hasn’t changed, people like you have simply dismissed it because its meaning has been eroded by people trying to sell you their products. that’s not ME moving goal posts, it’s you.

                you said a definition of 70 years ago is “old” and therefore irrelevant, but that’s a laughably weak argument for anything, but even weaker in a scientific context.

                is the Pythagorean Theorem suddenly wrong because it’s ~2500 years old?

                ridiculous.

  • Daxtron2@startrek.website

    I’m more infuriated by people like you who seem to think that the term AI means a conscious/sentient device. Artificial intelligence is a field of computer science dating back to the very beginnings of the discipline. LLMs are AI, Chess engines are AI, video game enemies are AI. What you’re describing is AGI or artificial general intelligence. A program that can exceed its training and improve itself without oversight. That doesn’t exist yet. AI definitely does.

    • MeepsTheBard@lemmy.blahaj.zone

      I’m even more infuriated that AI as a term is being thrown into every single product or service released in the past few months as a marketing buzzword. It’s so overused that formerly fun conversations about chess engines and video game enemy behavior have been put on the same pedestal as CyberDook™, the toilet that “uses AI” (just send pics of your ass to an insecure server in Indiana).

      • Daxtron2@startrek.website

        I totally agree with that, it has recently become a marketing buzzword. It really does drag down the more interesting recent discoveries in the field.

    • KingRandomGuy@lemmy.world

      Right, as someone in the field I do try to remind people of this. AI isn’t defined as this sentient general intelligence (frankly its definition is super vague), even if that’s what people colloquially think of when they hear the term. The popular definition of AI is much closer to AGI, as you mentioned.

    • dutchkimble@lemy.lol

      It doesn’t rhyme, And the content is not really interesting, Maybe it’s just a rant, But with a weird writing format.

  • PrinceWith999Enemies@lemmy.world

    I’d like to offer a different perspective. I’m a grey beard who remembers the AI Winter, when the term had so over promised and under delivered (think expert systems and some of the work of Minsky) that using the term was a guarantee your project would not be funded. That’s when the terms like “machine learning” and “intelligent systems” started to come into fashion.

    The best quote I can recall on AI ran along the lines of “AI is no more artificial intelligence than airplanes are doing artificial flight.” We do not have a general AI yet, and if Commander Data is your minimum bar for what constitutes AI, you’re absolutely right, and you can define it however you please.

    What we do have are complex adaptive systems capable of learning and problem solving in complex problem spaces. Some are motivated by biological models, some are purely mathematical, and some are a mishmash of both. Some of them are complex enough that we’re still trying to figure out how they work.

    And, yes, we have reached another peak in the AI hype - you’re certainly not wrong there. But what do you call a robot that teaches itself how to walk, like they were doing 20 years ago at MIT? That’s intelligence, in my book.

    My point is that intelligence - biological or artificial - exists on a continuum. It’s not a Boolean property a system either has or doesn’t have. We wouldn’t call a dog unintelligent because it can’t play chess, or a human unintelligent because they never learned calculus. Are viruses intelligent? That’s kind of a grey area that I could argue from either side. But I believe that Daniel Dennett argued that we could consider a paramecium intelligent. Iirc, he even used it to illustrate “free will,” although I completely reject that interpretation. But it does have behaviors that it learned over evolutionary time, and so in that sense we could say it exhibits intelligence. On the other hand, if you’re going to use Richard Feynman as your definition of intelligence, then most of us are going to be in trouble.

    • NABDad@lemmy.world

      My AI professor back in the early 90’s made the point that what we think of as fairly routine was considered the realm of AI just a few years earlier.

      I think that’s always the way. The things that seem impossible to do with computers are labeled as AI, then when the problems are solved, we don’t figure we’ve created AI, just that we solved that problem so it doesn’t seem as big a deal anymore.

      LLMs got hyped up, but I still think there’s a good chance they will just be a thing we use, and the AI goal posts will move again.

    • Rikj000@discuss.tchncs.deOP

      But what do you call a robot that teaches itself how to walk

      In its current state,
      I'd call it ML (Machine Learning).

      A human defines the desired outcome,
      and the technology "learns" to reach that desired outcome in a brute-force fashion (through millions of failed attempts, slightly improving itself upon each epoch/iteration), until the desired outcome defined by the human has been met.
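      To make the "brute-force fashion" concrete, here's a toy sketch of that loop: a human-chosen target plays the role of the desired outcome, and the system keeps a random tweak only when it reduces the error. The numbers and names are invented for illustration, not taken from any real robotics setup:

```python
import random

def learn(target, steps=10_000, seed=0):
    """Brute-force 'learning': mutate a guess and keep the mutation
    only if the human-defined error (distance to target) shrank."""
    rng = random.Random(seed)
    guess = 0.0
    error = abs(target - guess)
    for _ in range(steps):
        candidate = guess + rng.uniform(-0.1, 0.1)  # one random attempt
        candidate_error = abs(target - candidate)
        if candidate_error < error:                 # slight improvement? keep it
            guess, error = candidate, candidate_error
    return guess

# Thousands of mostly-failed attempts converge on the human-defined goal:
result = learn(3.7)  # ends up very close to 3.7
```

      Real training replaces the random tweak with gradient descent, but the shape of the loop (propose, score against a human-defined objective, keep improvements) is the same.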

      • 0ops@lemm.ee

        To be fair, I think we underestimate just how brute-force our intelligence developed. We as a species have been evolving since single-celled organisms, mutation by mutation over billions of years, and then as individuals our nervous systems have been collecting data from dozens of senses (including hormone receptors) 24/7 since embryo. So before we were even born, we had some surface-level intuition for the laws of physics and the control of our bodies. The robot is essentially starting from square 1. It didn’t get to practice kicking Mom in the liver for 9 months - we take it for granted, but that’s a transferable skill.

        Granted, this is not exactly analogous to how a neural network is trained, but I don't think it's wise to assume that there's something "magic" in us like a "soul", when the difference between biological and digital neural networks could be explained by our "richer" ways of interacting with the environment (a body with senses and mobility, rather than a token/image parser) and by the need for a few more years/decades of incremental improvements to the models and hardware.

        • rambaroo@lemmy.world

          A baby isn't just learning to walk. It also makes its own decisions constantly and has emotions. An LLM is not an intelligence, no matter how hard you try to argue that it is. Just because the term has been used for a long time doesn't mean it's ever been used correctly.

          It’s actually stunning to me that people are so hyped on LLM bullshit that they’re trying to argue it comes anywhere close to a sentient being.

          • Blueberrydreamer@lemmynsfw.com

            You completely missed my point obviously. I’m trying to get you to consider what “intelligence” actually means. Is intelligence the ability to learn? Make decisions? Have feelings? Outside of humans, what else possesses your definition of intelligence? Parrots? Mice? Spiders?

            I’m not comparing LLMs to human complexity, nor do I particularly give a shit about them in my daily life. I’m just trying to get you to actually examine your definition of intelligence, as you seem to use something specific that most of our society doesn’t.

    • Pipoca@lemmy.world

      Exactly.

      AI, as a term, was coined in the mid-50s by a computer scientist, John McCarthy. Yes, that John McCarthy, the one who invented LISP and helped develop Algol 60.

      It’s been a marketing buzzword for generations, born out of the initial optimism that AI tasks would end up being pretty easy to figure out. AI has primarily referred to narrow AI for decades and decades.

    • Fedizen@lemmy.world

      On the other hand, calculators can do things more quickly than humans; this doesn't mean they're intelligent or even on the intelligence spectrum. They take an input and provide an output.

      The idea of applying intelligence to a calculator is kind of silly. This is why I still prefer words like "algorithms" to "AI", as it's not making a "decision". It's making a calculation; it's just making it very fast, based on a model, and it's prompt-driven.

      Actual intelligence doesn’t just shut off the moment its prompted response ends - it keeps going.

      • 0ops@lemm.ee

        I personally wouldn't consider a neural network an algorithm, as chance is a huge factor: whether you're training or evaluating, you'll never get quite the same results twice.
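        A toy illustration of that nondeterminism (a single-weight "network" invented for this example, not any particular framework): two training runs differing only in random seed land near, but not at, the same answer.

```python
import random

def train_weight(seed):
    """Fit w so that w * x approximates 2 * x, with random init and
    randomly sampled data: every seed gives a slightly different w."""
    rng = random.Random(seed)
    w = rng.uniform(-1, 1)          # random initialization
    for _ in range(100):
        x = rng.uniform(0, 1)       # randomly drawn training sample
        error = w * x - 2 * x       # target function is y = 2x
        w -= 0.1 * error * x        # gradient step on squared error
    return w

run_a = train_weight(seed=1)
run_b = train_weight(seed=2)
# Both runs approach w = 2, but they are not identical:
assert run_a != run_b
```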

  • angstylittlecatboy@reddthat.com

    I’m agitated that people got the impression “AI” referred specifically to human-level intelligence.

    Like, before the LLM boom it was uncontroversial to refer to the bots in video games as “AI.” Now it gets comments like this.

    • Loki@feddit.de

      I wholeheartedly agree. People use the term "AI" nowadays to refer to a very specific subcategory of DNNs (LLMs), but yeah, it used to refer to any more or less """smart""" algorithm performing… something on a set of input parameters. SVMs are AI, decision forests are AI, freaking kNN is AI. "Artificial intelligence" is a loosely defined concept; any algorithm that aims to mimic human behaviour can be called AI, and I'm getting a bit tired of hearing people say "AI" when they mean GPT-4 or Stable Diffusion.
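      To show how small such a "smart algorithm on a set of input parameters" can be, here's a 1-nearest-neighbour classifier on toy data (everything below is invented for the example):

```python
def nearest_neighbour(train, query):
    """Classic 1-NN: label the query point with the class of the
    closest training point (squared Euclidean distance, 2-D)."""
    def dist2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    _, label = min(train, key=lambda item: dist2(item[0], query))
    return label

# Two invented clusters: "cat" points near (0, 0), "dog" points near (5, 5)
data = [((0, 0), "cat"), ((1, 0), "cat"), ((5, 5), "dog"), ((4, 5), "dog")]
print(nearest_neighbour(data, (0.5, 0.2)))  # → cat
print(nearest_neighbour(data, (4.6, 4.8)))  # → dog
```

      By the older, broader usage of the term, even these ten lines count as "AI".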

      • Kedly@lemm.ee

        I've had freaking GAMERS tell me that "it isn't real AI" at this point… No shit, the Elites in Halo aren't real AI either

        Edit: Keep the downvotes coming anti LLMers, your tears are delicious

    • Paradachshund@lemmy.today

      I’ve seen that confusion, too. I saw someone saying AI shouldn’t be controversial because we’ve already had AI in video games for years. It’s a broad and blanket term encompassing many different technologies, but people act like it all means the same thing.

  • Gabu@lemmy.world

    I'll be direct: your text reads like you only just discovered AI. We have much more than "only LLMs", regardless of whether or not these other models pass Turing tests. If you feel disgruntled, then imagine what people who've been researching AI since the 70s feel like…

  • MeetInPotatoes@lemmy.ml

    Maybe just accept it as shorthand for what it really means.

    Some examples:

    We say Kleenex instead of facial tissue, Band-Aid instead of bandage, I say that Siri butchered my “ducking” text again when I know autocorrect is technically separate.

    We also say, “hang up on someone” when there is no such thing anymore

    Hell, we say “cloud” when we really mean “someone’s server farm”

    Don't get me started on "software as a service" either… a bullshit fancy name for a subscription website that actually has some utility.

  • LainTrain@lemmy.dbzer0.com

    The distinction between AI and AGI (Artificial General Intelligence) has been around long before the current hype cycle.

    • fidodo@lemmy.world

      What agitates me is all the people misusing the words and then complaining about what they don’t actually mean.

  • ikidd@lemmy.world

    As a farmer, my kneejerk interpretation is “artificial insemination” and I get confused for a second every time.

  • ZzyzxRoad@sh.itjust.works

    Yes, but I’m more annoyed with posts and conversations about it that are like this one. People on Lemmy swear they hate how uninformed and stupid the average person is when it comes to AI, they hate the click bait articles etc etc. Aaand then there’s at least 5 different posts about it on the front page every. single. day., with all the comments saying exactly the same thing they said the day before, which is:

    “Users are idiots for trusting a tech company, it’s not Google’s responsibility to keep your private data safe.” “No one understands what ‘AI’ actually means except me.” “Every middle-America dad, grandma and 10 year old should have their very own self hosted xyz whatever LLM, and they’re morons if they don’t and they deserve to have their data leaked.” And can’t forget the ubiquitous arguments about what “copyright infringement” means when all the comments are actually in agreement, but they still just keep repeating themselves over and over.

  • Rooki@lemmy.world

    Yes, your summary is correct: it's just a buzzword.

    You can still check if it's a real human by doing something really stupid, or by speaking or writing gibberish. Almost every AI will try to reply to it anyway, or say "Sorry, I couldn't understand that". You can also ask about recent events (most of the LLMs aren't trained on the newest events).

  • geekworking@lemmy.world

    I started reading it as “Al” as in the nickname for Allen.

    Makes the constant stream of headlines a bit more entertaining, imagining all of the stuff that this guy Al is up to.

  • LucidNightmare@lemm.ee

    I just get tired of seeing all the dumb ass ways it’s trying to be incorporated into every single thing even though it’s still half-baked and not very useful for a very large amount of people. To me, it’s as useful as a toy is. Fun for a minute or two, and then you’re just reminded how awful it is and drop it in the bin to play with when you’re bored enough to.

    • kameecoding@lemmy.world

      I just get tired of seeing all the dumb ass ways it’s trying to be incorporated into every single thing even though it’s still half-baked and not very useful for a very large amount of people.

      https://i.imgflip.com/2p3dw0.jpg?a473976

      This is nothing but the latest craze. First it was drones, then crypto, then the metaverse, and now it's AI.

      • PraiseTheSoup@lemm.ee

        Metaverse was never a craze. Facebook would like you to believe it has more than a dozen users, but it doesn’t.

        • Eccitaze@yiffit.net

          The broader metaverse (mainly VRChat) had a brief boom during the pandemic, and several conventions (okay, yeah, it's furries) held events in there instead, since they were unable to hold in-person events. It's largely faded away, though, as pandemic restrictions relaxed.

  • flop_leash_973@lemmy.world

    The term is so overused at this point that I could probably take any script I write that has conditional statements in it and convince my boss I've created our own "AI".

    • TeckFire@lemmy.world

      For real. Some enemies in Killzone 2 "act" pretty clever, but aren't using anything close to an LLM, let alone "AI". Yet I bet if you implemented their identical behavior in a modern 2024 game and marketed it as the enemies having "AI", everyone would believe you in a heartbeat.

      It's just too over-encompassing. Saying "large language model technology" may not be as eye-catching, but it means I know whether you at least used that technology. Anyone can market as "AI", and it could be an Excel formula for all I know.

      • Gabu@lemmy.world

        The enemies in Killzone do use AI… the Goombas in the first Super Mario Bros. used AI. This term has been used to refer to NPC behavior since the dawn of videogames.

        • TeckFire@lemmy.world

          I know. That's not my point. I know that technically, "AI" could mean anything that gives the illusion of intelligence artificially. My use of the term was more that of the OP: a machine achieving sapience, not just the illusion of it. It's just down to definitions. I just prefer to use the term differently, and wish the world did too, but I accept that it does not.