I run a small VPS host and rely on PayPal for payments, mainly because (a) most VPS customers pay that way if you aren’t AWS or GoDaddy, and (b) it has very good fraud protection. My prior venture saw quite a few chargebacks through Stripe, so it went PayPal-only too.

My dad told me I should “reduce the processing fees,” citing ChatGPT’s claim that PayPal charges 5% when it actually charges 3–3.5% (plus 49 cents). Yet he insisted the charge was 5%.
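For what it’s worth, the gap is easy to check with arithmetic. A minimal sketch, assuming PayPal’s commonly cited 3.49% + $0.49 domestic checkout rate (actual rates vary by product and country, so treat these numbers as assumptions, not quotes):

```python
# Compare the real fee schedule against the claimed flat 5% on a sample charge.
# The 3.49% + $0.49 rate is an assumption based on PayPal's published domestic
# checkout pricing; check your own account's schedule before relying on it.
def fee(amount, percent, fixed=0.0):
    """Processing fee for a single transaction, in dollars."""
    return round(amount * percent / 100 + fixed, 2)

charge = 100.00
actual = fee(charge, 3.49, 0.49)  # 3.98
claimed = fee(charge, 5.0)        # 5.00
print(f"actual ${actual}, claimed ${claimed}")
```

At small ticket sizes the $0.49 fixed fee dominates, so the effective percentage on a $5 invoice is actually much higher than on a $100 one.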

Yes, PayPal sucks but ChatGPT sucks even more. When I was a child he said Toontown would ruin my brain, yet LLMs are ruining his even more.

  • Lovable Sidekick@lemmy.world · 2 days ago

    No, if he cited inaccurate information it was because he didn’t check it. Same as if he cited something he heard from a guy on the bus.

    • Robust Mirror@aussie.zone · 2 days ago

      I think there’s some shared blame. Chatgpt existing and marketing itself as useful makes people believe it. If you have to double check everything it says, what is the point of using it in the first place? This isn’t unsolicited information from someone you’re chatting to that came up naturally and should be checked, this is something you have to specifically choose to use.

      • Coriza@lemmy.world · 2 days ago

        And Google throws it in your face, so you have to be very careful. For years, when you googled something, the first result would be a snippet from a website; if you googled “PayPal fee” it would show a snippet from a site mentioning the fee. Now the result in the same place, in the same style, is an LLM response.

  • 6nk06@sh.itjust.works · 3 days ago

    Every boomer seems like that: “You shouldn’t trust anyone without fact-checking.”

    30 years later: “Let’s trust every shoe salesman and ChatGPT, they are my new friends.”

    • Initiateofthevoid@lemmy.dbzer0.com · 2 days ago

      A lot of people say “you can’t trust anything you read/see/hear” not because they are actually skeptical of their sources, but because it’s a thought-terminating cliche that allows them to continue disbelieving what they want to disbelieve and cherry pick what makes them feel vindicated or righteously angry.

      Cognitive dissonance is a hell of a drug. It turns out almost everyone believes they’re a skeptic, and everyone suffers from confirmation bias.

      Except you and I. And any of you reading this. We know the truth, of course. Don’t trust anyone that tells you otherwise.

      • callouscomic@lemm.ee · 2 days ago

        Yes, of course. We are also the ONLY people who can safely do [unsafe thing] and anyone else doing it is a moron.

      • Robust Mirror@aussie.zone · 2 days ago

        I agree with everything you said. And I get the tone at the end. There are certainly things I believe that are wrong. But while I obviously still have biases, I also have a crippling fear of telling someone something wrong. So generally I at the least won’t say something to someone else unless I’ve triple confirmed it’s correct.

        Which is also how I know I believe things that are wrong, because I’ve looked into something before telling it to people and found it was wrong. I don’t intentionally believe wrong things. But based on that I know there must still be some.

    • Karyoplasma@discuss.tchncs.de · 3 days ago (edited)

      I think the intention was to legalize plagiarism under the guise of helping humanity. Only if corporations do it of course.

      • Initiateofthevoid@lemmy.dbzer0.com · 2 days ago

        The intention was the same as it’s always been since the dawn of industry - to reduce the number of humans involved in the means of production. Workers are pesky middle men who keep interfering with profits by asking for expensive luxuries like living wages and human rights.

        Though I really don’t think any techbro in their deepest ketamine hole actually imagined that the biggest leap in automation since the assembly line was going to start with artistic expression.

  • blitzen@lemmy.ca · 3 days ago

    LLMs are undoubtedly impressive tech that will get better with time. But to anyone singing their praises too emphatically I say ask it something on a topic you are an expert on; you’ll quickly see how fallible they currently are.

    • KingJalopy@lemm.ee · 3 days ago

      Problem is a lack of expertise with most people. Most people I interact with are generally oblivious to most things, including their careers lol.

      • Robust Mirror@aussie.zone · 2 days ago

        Tbh, if they game, get them to ask it about their games; it fails spectacularly badly there, even worse than in general. It’s a bit better on TV shows and movies, probably because there are so many episode summaries and reviews online, but if you talk to it long enough and ask for varied and specific enough information, it’ll fail there too.

        They may not be an expert at something, but if they have a specific interest or hobby that’ll probably work.

    • asmoranomar@lemmy.world · 3 days ago

      LLM’s are not AI. Calling it as such says more about the person than the technology. And then I realize, they might be right… /s

      • 0laura@lemmy.dbzer0.com · 2 days ago

        Calling LLMs AI isn’t wrong at all; it’s just that sci-fi has made people think AI always means something as smart as a human. Heck, the simple logic controlling the monsters in Minecraft is called AI.

          • The problem is conflating a technical term “Artificial Intelligence” which is a specific field of study that is just incredibly poorly named (the name came about from aspirational beliefs of the early founders of the field) with what common conversation views as artificial intelligence.

            In a very technical sense of topical taxonomy it is correct to call LLMs “Artificial Intelligence”.

            In any other use of language, calling LLMs Artificial Intelligence is falling for a scam.

            • asmoranomar@lemmy.world · 2 days ago

              You guys are taking this way too seriously for a sarcastic remark. Perhaps you should put what I said into ChatGPT to have it explained. It seems to understand what I said well. Thanks for the bias check guys

                • asmoranomar@lemmy.world · 2 days ago

                  Touche, lol. You are exactly right.

                  But my point stands. It doesn’t have to be sci-fi intelligent, it just has to convince you it is. There have always been people who are not experts trying to explain things they know little about. AI just does that very well, and as you point out, AI (for now) understands nothing.

                  I’m not actually saying “LLMs are not AI”; that’s the sarcastic trap. What I am implying is that some people aren’t intelligent enough to know the difference. It bounces off the original comment: unless you ask it something you’re very knowledgeable in, AI feels very ‘right’. But people only read the first sentence and have a meltdown.

                  It’s just ironic that ChatGPT does a much better job of explaining what I said. Does that mean it’s more intelligent than me? What about the intelligence of the people who missed the sarcasm? I find it amusing.

          • Feathercrown@lemmy.world · 2 days ago

            Get off your high horse and use the same definitions as everyone else. Intentionally choosing to use different ones and then acting like everyone else is stupid is incredibly annoying and proves nothing.

          • Muad'dib@sopuli.xyz · 2 days ago

            In Dune, thinking machines are banned. Not even calculators are allowed. Complex maths is done by trained savants. I’d argue a calculator is AI, since it’s a humanmade device that can extrapolate information from information. That’s the definition of intelligence.

      • blitzen@lemmy.ca · 2 days ago (edited)

        Are you suggesting LLMs aren’t powering ChatGPT, the AI this post is about? Or am I just missing your joke (which is possible)?

        • asmoranomar@lemmy.world · 2 days ago

          It’s sarcasm. And yes, a joke. Along the same lines as ‘Think of your average person, and realize half the population is dumber’.

  • Creat@discuss.tchncs.de · 2 days ago (edited)

    He didn’t cite wrong information (only) because of ChatGPT, but because he lacks the instinct (or training, or knowledge) to verify the first result he either sees or likes.

    If he had googled for the information and his first click was an article that was giving him the same false information, he would’ve probably insisted just the same.

    LLMs sure make this worse, as much more information coming out of them is wrong, but the root cause is the same it’s been before their prevalence. Coincidentally it’s the reason misinformation campaigns work so well and are so easy.

    Edit: removed distraction

    • ChatGPT didn’t get that information from somewhere, because ChatGPT doesn’t have “information” in the first place. ChatGPT has, in effect, a large database of sentence structures, and it follows random paths through that to produce its output. Somewhere in its language structures “5%” sits near “fees”, and it happened to use that this time.

      • Ulrich@feddit.org · 2 days ago

        This is because ChatGPT doesn’t have “information” in the first place

        What? LOL then why are so many companies suing them for copyright?

        • Robust Mirror@aussie.zone · 1 day ago (edited)

          Because it’s complicated. It is fed that data, but it can’t access it, refer to it, look it up or anything like that. If you feed it all of reddit, you can’t just ask it what comments did this user make, it simply doesn’t know. It uses all the data it’s fed to build statistical patterns of language and concepts, which is what it then outputs.

          This is why it can quote things like Shakespeare, because that information is so widely repeated, it’s fed it so many times, it’s a common pattern it can reliably reproduce. But it isn’t looking in some database and finding that Shakespeare quote to repeat, it doesn’t have that ability or information.
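          The “patterns, not lookup” distinction can be illustrated with a toy bigram model. This is a deliberately crude stand-in for what an LLM actually learns, and the corpus here is made up:

```python
from collections import Counter, defaultdict

# Train a toy bigram "model": it keeps word-transition counts, not documents.
corpus = ("to be or not to be that is the question "
          "to be or not to be whether tis nobler").split()

model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

# A very common continuation is reliably reproduced, quote-like...
print(model["to"].most_common(1))  # 'be' dominates

# ...but "which document did 'nobler' come from?" is unanswerable:
# the training text itself is gone, only the statistics remain.
```

          The same holds at scale: widely repeated text like Shakespeare falls out of the statistics, while per-document details (which user posted which comment) do not.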

            • Robust Mirror@aussie.zone · 1 day ago

              If you have it search the internet, yes, but that’s completely different from its default behaviour. That’s specifically providing it a document to look at after it has been trained, which it can then read and refer to.

    • Ulrich@feddit.org · 2 days ago

      If he had googled for the information and his first click was an article that was giving him the same false information, he would’ve probably insisted just the same.

      If you’re looking up content written by humans and published to the internet in an article, it is far less likely to be wrong.

      • Creat@discuss.tchncs.de · 1 day ago

        It’s a bit less likely to be wrong, but there’s plenty of room for it to be wrong, whether maliciously, with intent, or through the author’s failure to research even basic things. One person being wrong once, by misreading, by misinterpreting data, or by trying to steer perception of something, can easily snowball into many sources repeating that wrong information (“I’ve read it, so it must be true”). Many kinds of information are also very dependent on perspective, adding nuance beyond “correct” and “false”.

        There are plenty of reasons to double-check information (seemingly) written by humans; you just double-check it for different reasons than AI content. But the basic idea, “it can easily be wrong”, is the same.

  • isekaihero@ani.social · 2 days ago

    I love AI… for entertainment. I use Crushon almost every day. I love chatbots. I can’t wait to see chatbot functionality integrated into games. But I’ve never talked to ChatGPT. I would never trust a chatbot for anything important. I know it’s not going to give me factual information.

    I have a coworker who is constantly asking ChatGPT questions and shares its answers with the rest of us like we should care. Kids growing up with chatbots are so fucked. I didn’t think it was possible to be fucked worse than Millennials were, but here we are.

    • applemao@lemmy.world · 2 days ago (edited)

      Yeah, there’s no way younger people won’t end up even dumber. It’s frightening. And I’m not even old and I can see it. That’s bad.

    • KingJalopy@lemm.ee · 3 days ago
      3 days ago

      Ask it if PayPal has a 5% fee? Sounds like he might have been arguing about it and tried to fact-check himself, and ChatGPT told him what he wanted to hear, maybe?

  • Lumiluz@slrpnk.net · 2 days ago

    The fee varies based on multiple factors, but international with currency conversion it can actually reach an estimated total of 5%.

    So technically the information wasn’t wrong either, it’s most likely your dad asked the wrong question or didn’t understand the answer.

    Searching on DDG gives a similar answer btw.

    It’s not LLMs ruining his brain, he’s just dumb on the topic.
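    The stacking can be sketched with illustrative numbers. The surcharges below are assumptions for the sake of the sketch (PayPal’s real schedule varies by country, product, and currency), not quotes:

```python
# Illustrative only: how an "all-in" international PayPal cost can pass 5%.
# All three rates are assumptions, not PayPal's actual published schedule.
BASE_PCT = 3.49   # assumed domestic processing rate
INTL_PCT = 1.50   # assumed international transaction surcharge
FIXED = 0.49      # assumed fixed per-transaction fee (USD)

amount = 100.00
total_fee = amount * (BASE_PCT + INTL_PCT) / 100 + FIXED
print(f"effective rate: {total_fee / amount:.2%}")
```

    Add a currency-conversion spread on top and the all-in cost climbs further, which is plausibly where a “5%” impression comes from.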

  • Blue_Morpho@lemmy.world · 3 days ago (edited)

    Googling could have also returned bad info. Lemmy has bad info. A newspaper could have reported bad info about paypal. Bad info isn’t an AI problem.

    The fact that chatgpt returned bad info means most of the internet has bad info about PayPal’s rates.

    • mozingo@lemmy.world · 3 days ago (edited)

      Well, sure. But if you go to the PayPal website, you can see the correct information. Before Google’s AI popped up at the top of the screen, the PayPal website would have. In this situation, Google is now prioritizing misinformation its AI found on some outdated website over the official PayPal website that has the correct info. That’s the issue.

      • Blue_Morpho@lemmy.world · 3 days ago

        The OP said chatgpt. I just tried it:

        [screenshot]

        And I thought it weird that OP said his dad asked chatgpt. Who uses chatgpt instead of Google for stuff like that?

        • vala@lemmy.world · 2 days ago

          This screenshot doesn’t really prove anything but that’s not how chatgpt works. It might have given you the right info and someone else the wrong info.

          Even if they were static, deterministic things, which they aren’t in the context of end user services like chatgpt, just giving two slightly different prompts could cause something like this to happen.
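          The non-determinism is a consequence of temperature sampling. A minimal sketch with a made-up next-token distribution (nothing to do with ChatGPT’s real model or weights):

```python
import math
import random

# Toy next-token logits: with temperature > 0, the same "prompt" can yield
# different completions on different runs -- including the less likely ones.
logits = {"3.5%": 2.0, "5%": 1.0, "2.9%": 0.5}

def sample(logits, temperature=1.0, rng=random):
    """Draw one token from softmax(logits / temperature)."""
    weights = {tok: math.exp(v / temperature) for tok, v in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # float-rounding fallback

# Two independent draws from the same distribution can disagree.
print(sample(logits), sample(logits))
```

          Hosted services sample at non-zero temperature (and the deployed model itself changes over time), so two users asking the same question really can get different answers.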

        • A very scary number of people do.

          In Summer 2024 I was in Canada visiting family. On the way I stopped by Whistler and went up with my SO in the … whatever you call those suspended car things on wires that take you up mountains. (Too lazy to look it up.) A bunch of people in their 20s were asking ChatGPT about things to do in Ottawa (apparently their next stop) and calling out the things that it was recommending.

          I recognized about 1/3 of the named restaurants, specifically, as places that had closed. But these people were asking ChatGPT about them and thus getting hallucinated information.

        • mozingo@lemmy.world · 2 days ago (edited)

          Ah, yeah, sorry, my brain scrambled that. But same point really. ChatGPT doesn’t always pull from the current official website for its data either, so same problem. ChatGPT and Google are loudly marketing “Hey, you don’t need to search for the info, our AI will give it to you,” when the AI is wrong a lot.

    • vala@lemmy.world · 2 days ago

      The problem is a lack of critical thinking skills. There is only one reliable way to get information like this, and that’s the primary source.