• Onno (VK6FLAB)
    36 points · 6 months ago

    What will it take until people get it through their thick skulls that ChatGPT isn’t intelligent, doesn’t learn, and is a tool that can only generate plausible gibberish?

    Using the same tools to detect such gibberish will give you more gibberish.

    Garbage in, garbage out has been true since the difference engine; it’s just that today the garbage smells like English words. Still garbage, but not knowledge, intelligence or anything like it.

    The machine learning approach used to build so-called large language models like ChatGPT is also being used to create weather forecasting models that are bigger, better and orders of magnitude faster than anything available until now.

    These tools have changed life, but I’m unconvinced that this is a suitable, sustainable or realistic way to create artificial intelligence, despite claims to the contrary.

    • Scrubbles
      21 points · 6 months ago

      People are so insistent that it’s AI that it all reminds me of blockchain. It’s new! It’ll change everything!

      It’ll change some things. What we are seeing now is business forcing it into everything when really, right now, there are only a handful of things it makes sense to use it for.

      It’s really great at giving you a starting point, a very rough outline of something. That is the easy part. The hard part is turning that into something new and coherent, and there I think modern AI is nowhere close. That needs a human.

      • Value Subtracted (OP, mod)
        10 points · 6 months ago

        I think it’s definitely a bubble that will burst eventually.

        At the same time, I don’t think there’s any way to put the toothpaste back in the tube. This technology is out there, and even once the hype has died down, we’re going to be dealing with it forever.

        • Scrubbles
          11 points · 6 months ago

          In the sense that AI is an extremely general term that covers many different technologies, yes. Generative AI/LLMs are not true AGI, which is what people think they are. It cannot think, it cannot learn, it can only predict.

            • Scrubbles
              8 points · 6 months ago

              That’s actually pretty good… the techbro equivalent of “We did it!”

          • Logi
            -1 points · 5 months ago

            It cannot think, it cannot learn, it can only predict.

            That’s a distinction without a difference. If it can predict what an AGI would do in a given situation, then it is an AGI.

            I’m not saying that it is an AGI, but the reason it isn’t is more than “it can only predict”.
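
            A minimal toy sketch (Python, everything made up for illustration) of what “it can only predict” means in practice: an autoregressive model generates text by repeatedly answering one question, “given what’s been written so far, which token most likely comes next?”. A real LLM learns billions of parameters rather than a lookup table, but the generation loop has the same shape.

            ```python
            # Toy autoregressive generation: not a real LLM, just the shape of one.
            # The vocabulary and probabilities below are invented for illustration.
            NEXT_TOKEN_PROBS = {
                ("the",): {"cat": 0.6, "dog": 0.4},
                ("the", "cat"): {"sat": 0.7, "ran": 0.3},
                ("the", "cat", "sat"): {"down": 0.9, ".": 0.1},
            }

            def predict_next(context):
                """Return the most likely next token for this context (greedy decoding)."""
                probs = NEXT_TOKEN_PROBS.get(tuple(context))
                if not probs:
                    return None  # the "model" knows nothing about this context
                return max(probs, key=probs.get)

            def generate(prompt, max_tokens=10):
                """Produce text by repeating the only operation available: predict the next token."""
                tokens = list(prompt)
                for _ in range(max_tokens):
                    nxt = predict_next(tokens)
                    if nxt is None:
                        break
                    tokens.append(nxt)
                return " ".join(tokens)

            print(generate(["the"]))  # -> "the cat sat down"
            ```

            Nothing in that loop understands or learns at generation time; whether repeated prediction at sufficient scale can amount to something AGI-like is exactly what the two comments above disagree about.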

    • @gravitas_deficiency@sh.itjust.works
      4 points · 6 months ago

      Nobody who’s not an engineer seems to give a shit about, or indeed even understand, the nuances of LLM technology, or the technical reasons behind its limitations and the implications thereof. Hell, I know a lot of engineers who don’t care about it or understand it at a meaningful level.

      • @stoly@lemmy.world
        2 points · 5 months ago

        And some of the engineering types are busy kissing the feet of people like Altman and Musk, so they don’t even get a chance to notice.

    • @Bye@lemmy.world
      2 points · 6 months ago

      Nothing. It seems close enough to human that most people actually can’t think about it any other way.

    • @stoly@lemmy.world
      1 point · 5 months ago

      I manage computing for a large university. One of my recently graduated students told me he’d assumed that technology just worked, until he worked for me and saw the problems that come up. He was already a very tech-aware person and is going for a PhD in Informatics, so if even he didn’t understand this, what can we expect from the general public?

  • @disguy_ovahea@lemmy.world
    12 points · 6 months ago

    I don’t just want AI news to fail, I want it to take the web-scraping, trending-post news bots down with it.

    Bring investigative journalists back to news media.

  • AutoTL;DR (bot)
    1 point · 6 months ago

    This is the best summary I could come up with:


    Originality costs money, but Gasuras started running her work through other AI detectors before submitting to make sure she wasn’t getting dinged by mistake.

    When ChatGPT set the world on fire a year and a half ago, it sparked a feverish search for ways to catch people trying to pass off AI text as their own writing.

    Alex Cui, chief technology officer for the AI detection company GPTZero, said detectors have meaningful shortcomings, but the benefits outweigh the drawbacks.

    Mark, another Ohio-based copywriter who asked that we withhold his name to avoid professional repercussions, said he had to take work doing maintenance at a local store after an AI detector cost him his job.

    It’s true that the internet is being flooded by low-effort content farms that pump out junky AI articles in an effort to game search results, get clicks, and make ad money from those eyeballs.

    For example, Gillham said “we advise against the tool being used within academia, and strongly recommend against being used for disciplinary action.” He explained the risk of false positives is too high for students, because they submit a small number of essays throughout a school year, but the volume of work produced by a professional writer means the algorithm has more chances to get it right.


    The original article contains 2,024 words, the summary contains 214 words. Saved 89%. I’m a bot and I’m open source!
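
    A rough way to put numbers on Gillham’s volume argument in the summary above. The 2% false-positive rate and the submission counts are assumptions made up for this illustration (the article gives no figures), and treating each check as independent is a simplification.

    ```python
    # Hypothetical numbers only: how sample size changes what a single false flag means.
    FP_RATE = 0.02  # assumed chance that a human-written piece is wrongly flagged

    def p_any_false_flag(n_pieces, p=FP_RATE):
        """Chance that at least one of n human-written pieces is wrongly flagged,
        treating each check as independent."""
        return 1 - (1 - p) ** n_pieces

    for label, n in [("Student, 6 essays/year", 6), ("Copywriter, 300 pieces/year", 300)]:
        print(f"{label}: {p_any_false_flag(n):.1%} chance of at least one false flag; "
              f"one flag is {1 / n:.1%} of their output")
    ```

    Under these assumptions the student faces roughly an 11% chance of a wrongful flag, and that one flag covers a sixth of their record, which is why relying on a single verdict for disciplinary action is so risky; the prolific copywriter will almost certainly collect occasional false flags, but each is a fraction of a percent of a body of work the detector mostly clears.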