• deadcream
    -9
    1 year ago

    It’s just projection of hatred for techbros (especially celebrities like Musk). Everything techbros love (crypto, AI, space, etc.) gets hated automatically.
    In other words, they don’t really hate AI. You can’t hate something you have zero understanding of. It’s just an expression of hate for the people who promote it.

    • chaogomu
      6
      1 year ago

      AI is not good. I want it to be good, but it’s not.

      I’ll clarify: half of the shit it spits out is nonsense, and the rest is questionable. Even so, it’s already being used to put people out of their jobs.

      Techbros think AI will run rampant and kill all humans, when they’re the ones killing people by replacing them with shitty AI. And the worst part is that it isn’t even good at the jobs it’s being used for: it makes shit up, it plagiarizes, it spits out nonsense. A disturbing amount of the internet is also starting to become AI-generated, which is a problem in itself, because AI is trained on the wider internet, and now AI is being trained on the shitty output of AI. That will lead to fun problems and the collapse of the AI. Sadly, the jobs taken by AI will not come back.

      • @Aceticon@lemmy.world
        4
        1 year ago

        It’s a tool which can be used to great effect in the right setting, for example to wrap cold, tersely stated knowledge in formats with much broader appeal, and to reverse that process.
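
        A minimal sketch of that round trip, assuming the standard openai Python client (the model name and prompts here are placeholders I made up, not anything from the vendor):

        # Sketch only: expand terse bullet points into prose, then compress it back.
        # Assumes the openai package is installed and OPENAI_API_KEY is set.
        from openai import OpenAI

        client = OpenAI()

        bullets = "- Q3 revenue up 4%\n- churn flat\n- hiring freeze extended"

        # Bullet points -> short article.
        article = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user",
                       "content": "Write a short plain-language article from these notes:\n" + bullets}],
        ).choices[0].message.content

        # And the reverse: article -> bullet points.
        summary = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": "Summarise this article as terse bullet points:\n" + article}],
        ).choices[0].message.content

        print(summary)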

        However it’s being sold by greedy fuckers, who stand to gain from people jumping onto the hype train, as something else altogether: a shortcut to knowledge and to the output of the people who have it, because there’s a lot more money to be made from that than from something which can “write an article from a set of bullet points”.

        For me the most infuriating aspect is that this is hardly the first such hype train out of “TechBrosCity” bound for “FleeceTheSuckersTown” that we’ve seen in the last two decades; it’s not even the second or the third. There have been plenty of them, always following the same formula, to the point that the “great men” of this Tech era (such as Musk), unlike those of the first Tech boom that ended in 2000, are people who repeatedly got rich off this kind of thing by fleecing suckers, not by making things.

      • @_danny@lemmy.world
        3
        1 year ago

        It’s definitely gone downhill recently, but at the launch of GPT-4 it was pretty incredible. It would make logical jumps that a lot of actual people probably wouldn’t make. My “wow moment” was asking how many M&M’s would fit in a typical glass milk jug; it gave measurements and cited actual equations, and when I checked it myself (by weight) its answer was only about 8% off. I couldn’t find anything through Google that solved the same problem, or gave the same answer, that it could have just copied. It was supposed to be bad at math, but GPT-4 got those kinds of problems pretty much spot on for me.
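
        For context, the back-of-the-envelope version of that estimate looks something like this (the numbers below are my own rough assumptions, not the figures GPT-4 gave):

        # Rough estimate: plain M&M's in a one-gallon glass milk jug.
        # All figures are assumed round numbers, not GPT-4's actual output.
        jug_volume_ml = 3785       # ~1 US gallon
        mm_volume_ml = 0.64        # approximate volume of a single plain M&M
        packing_fraction = 0.66    # ellipsoids pack a bit denser than spheres

        count = jug_volume_ml * packing_fraction / mm_volume_ml
        print(round(count))        # roughly 3900 candies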

        I think most people who have tried the latest AI models have had a bad experience because the same compute is now spread across far more users.

        • chaogomu
          2
          1 year ago

          There’s also the issue of model collapse: when an AI is trained on data generated by AI, the errors and hallucinations start to compound until all you have left is gibberish. We’re about halfway there.
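
          The effect is easy to see in a toy version: fit a word-frequency “model” only on text sampled from the previous generation’s model and watch the rare words vanish (nothing below is a real LLM, just an illustration of the feedback loop):

          # Toy model-collapse loop: each generation is trained only on text
          # sampled from the previous generation. Once a rare word misses a
          # sampling round, its probability is zero forever.
          import random
          from collections import Counter

          # "Generation 0" is the real corpus, with a long tail of rare words.
          corpus = ["the"] * 500 + ["cat"] * 100 + ["sat"] * 50 + ["quantum"] * 3 + ["ossify"]

          model = Counter(corpus)
          for generation in range(1, 6):
              words, weights = zip(*model.items())
              synthetic = random.choices(words, weights=weights, k=300)  # AI-written "corpus"
              model = Counter(synthetic)                                 # retrain on it
              print(f"gen {generation}: vocabulary = {sorted(model)}")
          # Within a few generations the rare words are usually gone for good:
          # the tail of the distribution has collapsed.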

          • FaceDeer
            3
            1 year ago

            ChatGPT is trained on data with a cutoff in September 2021. It’s not training on AI-generated data.

            Even if some AI-generated data is included, as long as it’s reasonably curated and mixed with non-AI data, model collapse can be avoided.

            “Model collapse” is starting to feel like just a keyword for “this AI isn’t as good as I wanted.”

          • @_danny@lemmy.world
            2
            1 year ago

            I feel like you’re under-educated on how and when AI models are trained. The GPT models in particular aren’t “constantly learning” like some other models; they’re tweaked in discrete increments by developers trying to cover their asses and make the model less likely to say things they can be sued for.

            Also, AI is already training other AI; that’s kind of how these models are made… There’s one AI that scores how well a given phrase follows another phrase, and that score is used to train the part of the AI you actually interact with (arguably they’re parts of the same whole, depending on how you view the architecture).
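
            Very roughly, that training loop has this shape (a toy sketch with made-up stand-ins, not anyone’s real code):

            # Toy sketch of one model training another (RLHF-style). Both
            # "models" below are made-up placeholders, not a real system.

            def reward_model(prompt: str, reply: str) -> float:
                """Stand-in for the AI that scores how well a reply follows a prompt."""
                return float(len(set(prompt.split()) & set(reply.split())))

            def chat_model(prompt: str, temperature: float) -> str:
                """Stand-in for the model users actually talk to."""
                return prompt.upper() if temperature > 0.5 else prompt

            def training_step(prompt: str) -> str:
                # Sample candidate replies, score them with the reward model, and
                # keep the best one; a real system would do a gradient update here.
                candidates = [chat_model(prompt, t) for t in (0.2, 0.6, 0.9)]
                return max(candidates, key=lambda reply: reward_model(prompt, reply))

            print(training_step("the cat sat on the mat"))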

            CGP Grey has a good intro video on how bots learn; it’s pretty outdated and not really applicable to how LLMs learn, but the general idea is still there.

    • @aesthelete@lemmy.world
      2
      1 year ago

      Not everyone who dislikes a thing, or the promoters of that thing, “has no idea what it is”… but sure, go off I guess. 🤷