Google researchers have come out with a new paper that warns that generative AI is ruining vast swaths of the internet with fake content — which is painfully ironic because Google has been hard at work pushing the same technology to its enormous user base.

        • @deranger@sh.itjust.works

I disagree; you can completely avoid both FB and Twitter. You’re not going to find any valuable info on either, except maybe some cutting-edge current-event coverage on Twitter. If either of them were completely wiped tomorrow, my life would be unchanged.

          Google on the other hand used to be a great resource for finding good info, but that’s ruined and getting worse as we speak. I’d argue its decline is significantly more impactful than what’s happened to FB/Twitter. It impacts me on a daily basis.

  • andrew_bidlaw

    LLMs are insanely productive content creators. We can’t say how much of the web is generated by them at any given moment (and that’s ignoring older copy-paste articles), but the share of organic material one would want to prioritise in machine learning is shrinking significantly. This tech, if not isolated from its own output as training material, predictably falls into a feedback loop, and each cycle makes it worse.

    Surprisingly, pre-LLM-boom datasets may well become more valuable than contemporary ones.

    • @Naz@sh.itjust.works

      I remember reading that from 2021 to 2023, LLMs generated more text than all humans had ever published combined, so arguably, genuinely human-written text is going to become a rarity.

  • @flop_leash_973@lemmy.world

    This person already has a vague-sounding meeting on their calendar with an HR rep, their supervisor, and maybe a VP. To “align vision and expectations.”

  • @atrielienz@lemmy.world

    Ironic.

    “Manipulation of human likeness and falsification of evidence underlie the most prevalent tactics in real-world cases of misuse,” the researchers conclude. “Most of these were deployed with a discernible intent to influence public opinion, enable scam or fraudulent activities, or to generate profit.”

    Who could have seen that coming? But in all seriousness, this is exactly why so many people have been so vehemently opposed to generative AI. It’s not because it can’t be useful. It’s literally because of how it is actively being used.

  • @yggstyle@lemmy.world

    “Google looking for a different guy to blame for search enshittification, when internal documents point to them ruining it long before AI was a problem.”