• technocrit@lemmy.dbzer0.com · 8 points · edited · 13 hours ago

    Why would I ever want to use an operating system that’s been refined and optimized (aka Linux) when I could use a huge, wasteful statistics app prone to extreme error?

  • teft@piefed.social · 29 points (1 downvote) · 1 day ago

    And I want my cock sucked 24/7 but we don’t always get what we want.

  • stabby_cicada@slrpnk.net · 70 points · edited · 1 day ago

    Just to make sure I understand: the amazing new capability of ChatGPT is the ability to open a different app and use it for you.

    So instead of using the app directly you’re paying a middleman to use the app for you.

    … Now that I type it out, I can see a ready-made audience among tech-illiterate older people.

    • hazel@lemmy.blahaj.zone · 23 points · edited · 1 day ago

      And younger people. We now have a sizable population of young adult technology users who have never known an operating system that wasn’t full of ads, obfuscated cloud storage, paywalled everything and apps that won’t run without an Internet connection. Adoption of an AI interface like this would seem intuitive and rational to these users, as they haven’t really experienced agency within their tech environment to begin with.

      • stabby_cicada@slrpnk.net · 8 points · 18 hours ago

        I keep being reminded of how big tech deliberately stunted younger users’ abilities in order to keep them locked in Apple’s or Android’s walled gardens.

        And I keep forgetting because it makes me so mad I don’t want to think about it :/

      • kadu@scribe.disroot.org · 18 points · 1 day ago

        My young students can’t unzip a file or navigate to a folder that’s nested more than a few levels deep in a hierarchy.

  • GreenKnight23@lemmy.world · 16 points · 1 day ago

    This has been a trend I’ve noticed over the last decade, and it’s gotten considerably worse over the last five years.

    I am the user. Let ME make my decisions. Let ME take action. Stop fucking doing shit for me. I fucking know exactly what I want to do damn it.

    • ɯᴉuoʇuɐ@lemmy.dbzer0.com · 11 points · 1 day ago

      Why would investors care about whether he’s bullshitting either, if he’s making them money? They just have to sell before it all tanks.

  • hendrik@palaver.p3x.de · 27 points · edited · 1 day ago

    “It has never been faster to go from idea to product. You can really feel the acceleration at this point,” Altman said.

    Lol, do we take inspiration from the acceleration his product is experiencing these days? Or shall we talk about the idea of his product becoming profitable one day? I mean, congratulations on having a fast product, but I’d prefer opinions from people with successful and profitable ones?!

    • WhiteOakBayou@lemmy.world · 19 points · 1 day ago

      The advice you want from him is how to convince people you’ll be profitable before you run out of runway despite all evidence to the contrary. Whether or not the product makes money does not seem germane to whether or not Sam Altman gets paid.

    • FigMcLargeHuge@sh.itjust.works · 5 points · 23 hours ago

      Can we start with the product actually working? I had some people on a team pinging me about an error message they were getting, along with instructions for how to fix it, which after a little investigation turned out to be not just wrong but horrifically wrong in both its analysis and its supposed solution. A little more digging showed the team had asked ChatGPT what I should do. What had actually happened was that ChatGPT found and spat out the solution for an error message that was “close” to the one the software was producing; since it isn’t human, it just goes with what it has been trained on and evidently returns whatever it finds that’s close.

      It really is some fucking annoying software, and the fact that the press and others ran with this whole ‘artificial intelligence’ name still pisses me off. Putting in more if/then statements doesn’t make your software ‘intelligent’, and they have obviously been feeding it everything, true or not, with no actual vetting of the data. Back when we had physical encyclopedias, and even digital ones, you could at least trust that the information had been researched. These guys can’t even get that right.

      • hendrik@palaver.p3x.de · 2 points · edited · 20 hours ago

        I guess it’s working to specification, though. ChatGPT is supposed to generate plausible text; being correct is more of a nice side-effect we get on occasion. I’ve had similar experiences. Sometimes it gives good advice and tells people where to look and what to do. Other times it makes something look easy and doable for a non-technical person: the steps are nicely written down, but they’re completely fabricated and none of it works that way.

        I’ve had people approach me with that, and it’s always a letdown for them when I tell them it’s entirely unusable. It is a great opportunity to teach about AI and its limitations with factual stuff, though. Another area where this surfaces is GitHub issues on Free Software projects. We sometimes get wild bug reports or pull requests written by AI. Of course it mostly wastes everyone’s time, as AI isn’t “intelligent” enough to deal with the complexity involved. It’d be way better if people just wrote down what they saw and what they’re dealing with.

        And by the way, I think the term “Artificial Intelligence” dates back to 1955; it’s not like journalists jumped on it recently. But it is a broad field: we have things like expert systems and formal reasoning, and I think those definitely fit, as logic and reasoning have something to do with intelligence. Modern chatbots and LLMs belong to the same field; it’s just that the “artificial” aspect is very pronounced. And due to the realism of the output text, it’s easy to mistake them for proper intelligence, or forget about the nature of the thing…

        I always advocate for making the watermarking of AI-generated text a legal requirement. I can’t stop people from using AI, but I do need to be able to tell whether a bug report is genuine or made up.

        • FigMcLargeHuge@sh.itjust.works · 2 points · 14 hours ago

          Great reply. As far as the term goes, I’m just a little salty with the media and how they ran with the term ‘drone’ when they were talking about quadcopters. They did it to stir up controversy with the general public and induce a manufactured fright. Modern media’s manipulation of language to their advantage is another topic…

  • besselj@lemmy.ca · 24 points · 1 day ago

    They’re just flailing. Coming up with one idea after the next to keep the VC investments coming.