AI generated content, which now includes incredibly convincing videos of people, will grow exponentially over the next weeks, months, and years.

At some point, the majority of the content you see will be fake, and any usefulness or connection to humans will be lost.

Even information that you might have previously been able to confirm from a trusted source can (and will) be manipulated in some way, making verification impossible.

This lack of verification, combined with the speed at which fake content can now be generated, will make that content impossible to defend against.

Even the worlds of art and communication have been tainted, severing any connection to real people in this digital hellscape.

With all of that in mind: when will the internet become so untrustworthy, “soulless”, and useless to you that it crosses a tipping point?

EDIT: Ok, holy fuck. There’s actually a term for what I’m describing: “The Dead Internet Theory”

  • throwawayacc0430@sh.itjust.works · 7 hours ago

    Never.

    It’s still usable for pirating books and movies and tv.

    I’d just have to ignore most “user-generated” content.

    Dead Internet hypothesis is only applicable to user-generated content platforms.

    • Showroom7561@lemmy.ca (OP) · 6 hours ago

      I’d just have to ignore most “user-generated” content.

      Dead Internet hypothesis is only applicable to user-generated content platforms.

      AI will affect far more than just social media shitposts, though.

      All news (local, national, global). Educational sites. Medical information. Historical data. Almanacs/encyclopedias. Political information. Information about local services (e.g., outages). Interviews. Documentaries.

      I mean, all of these can be easily manipulated now in any medium. It’s only a matter of how quickly AI saturates these spaces.

      Trustworthy sources will be few and far between, drowned out by content that can be generated thousands of times faster than real humans can produce it.

      What then?

      I even worry about things like printed encyclopedias being manipulated, too. We stand to lose real human knowledge if AI content continues to accelerate.

      There really aren’t viable alternatives to those things, unless they’re recreated the way they existed before the internet was used for everything.

      • throwawayacc0430@sh.itjust.works · 6 hours ago

        The Only Solution is to support the How To Basic YouTube channel, the last base of the resistance.

        Transcript

        Hi, my name is Michael Stevens.

        You may know me as the creator and host of Vsauce1 on YouTube. On December 8, 2011, I created the How To Basic YouTube channel. I created it as what I believe to be step one in an important human revolution. As I looked around at what technology was doing to us, I realized that we were offloading information and skills to machines. You no longer have to know how to fix a dented car or how to make an apple pie; you could just… “Google it”. The human mind was being replaced by machines, and once that replacement is finished… humanity’s gone. I thought warning people would be enough, but then I realized… it was too late… Only a revolution that tore down the infrastructure of technology in our world would be sufficient. And I could only do that from the inside. I needed to upload DIY informational and educational content full of misinformation and absurdist comedy. That way, the system would fall apart. People wouldn’t trust machines, and we would all have to trust ourselves.

        /jk Of course this is just a joke. OR IS IT?

        Join The Revolution!!!

    • faythofdragons@slrpnk.net · 11 hours ago

      God bless the grass that grows through the crack, they roll the concrete over it and try to keep it back. The concrete gets tired of what it has to do, it breaks and it buckles, and the grass grows through.

      -Malvina Reynolds

  • vane@lemmy.world · 12 hours ago

    As long as you can do messaging / video / voice chat, do work, taxes, and groceries over the cable, the internet will be here. Everything else is called entertainment, and it’s optional. You can just as well play games or watch movies or read books or listen to music instead of watching the news, and nothing will happen, because it’s just another type of entertainment at this point.

    If you’re scared of music or movies generated by AI, listen to music and watch movies produced before the year 2020. That’s it. You won’t have enough time in your life to experience all the content humanity has created up to this point, no matter how hard you try.

  • hperrin@lemmy.ca · 17 hours ago

    There will always be areas online with real people. Namely, my irl friends.

    • Showroom7561@lemmy.ca (OP) · 17 hours ago

      Yes, of course. I’m not talking about that.

      Even here, on Lemmy: how long before the replies you get are from bots, and you’re posting for an audience of bot users? Will there even be a point in continuing to spend time on that?

      When you see news being reported, at some point, you’ll have no idea what’s real or fake. And it will be so ubiquitous that you’ll need to spend a considerable amount of time to even attempt to verify whether it’s true or trustworthy. At what point will you simply stop paying attention to it?

      • hperrin@lemmy.ca · 17 hours ago

        I don’t think I’ll ever stop paying attention. We’re already surrounded by conmen. We’re just automating those conmen. And we’ve had fake news forever.

        • Showroom7561@lemmy.ca (OP) · 16 hours ago

          And we’ve had fake news forever.

          Yes, but it was limited in scope.

          The fake news of yesterday still needed real people to spread disinformation. Fake news of tomorrow will have convincing videos, photos, “verified sources”, interviews, article upon article expanding on the disinformation, and millions of bots to promote the content through social media and search.

          “Fake” will be real enough to have catastrophic effects on actual people.

          It’s like going from spitting wads of tissue out the end of a straw to dropping hydrogen bombs. We aren’t prepared for what’s to come in the landscape of fake news.

          • hperrin@lemmy.ca · 16 hours ago

            What you’re describing already exists in the conservative media ecosystem. And yeah, we weren’t prepared for it. That’s why the president of the United States is a septuagenarian felon.

              • blackbelt352@lemmy.world · 15 hours ago

                AI is just one more tool in the arsenal of propaganda for them. People in power haven’t even needed computers and advanced technology to get people to commit atrocities; we’ve been doing that for centuries.

              • DeceasedPassenger@lemmy.world · 15 hours ago

                So yes, the landscape is changing drastically. As it has done before. I believe it’s possible for us to adapt when that happens, as we have before. We’re not yet at the point you describe so I couldn’t say how it’ll happen. But I have a blind faith that it’s at least possible for us. This doesn’t mean we should relax and wait for it to happen, but I would rather think about solutions than “oh no we’re fucked nooo”.

  • leave_it_blank@lemmy.world · 12 hours ago

    The internet is already, to a huge extent, untrustworthy and soulless.

    When I think back to the 2000s, it was the Wild West. And it was fun. Not efficient, but adventurous fun.

    At this point, I don’t care anymore. Give me the items and information I need; anything else, like Lemmy, is just a small bonus.

  • quediuspayu@lemmy.world · 17 hours ago

    I see more of a future split in two: a parallel internet of dark-web-like spaces and fragmented local mesh networks on one side, and the corporate slop internet on the other.

  • MagicShel@lemmy.zip · 14 hours ago

    I think there are going to be tools to identify networks of people and content you don’t want to interact with. This website is pushed by that social media account, which is boosted by these 2000 accounts that all exhibit bot-like behavior? Well, let’s block the website, of course, but also let’s see who else those 2000 bots promote; let’s see who else promotes that website.

    The people identified as part of that web will either be bots, disingenuous actors (trolls, state-sponsored propaganda, etc.), or gullible people pushing bullshit they have given no thought to understanding.

    I think the internet might just get better in the future, rather than worse. But we’ll see.
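
    Roughly the kind of check I have in mind, as a toy sketch in Python. The account names, site names, data source, and threshold below are all made up for illustration; a real tool would need actual promotion data from an API or a crawl, plus a separate way of flagging bot-like behavior.

        # Toy sketch: start from one flagged website, find the accounts that push it,
        # then flag every other site pushed by enough of those same accounts.
        # All names and the min_overlap threshold are invented for illustration.
        from collections import defaultdict

        # who promotes what: account -> set of sites it boosts (toy data)
        promotions = {
            "bot001": {"slopnews.example", "cheappills.example"},
            "bot002": {"slopnews.example", "cheappills.example"},
            "bot003": {"slopnews.example"},
            "alice":  {"localpaper.example"},
        }

        def expand_blocklist(seed_site, promotions, min_overlap=2):
            """Return the accounts pushing seed_site and the other sites
            that at least min_overlap of those accounts also push."""
            pushers = {acct for acct, sites in promotions.items() if seed_site in sites}
            site_counts = defaultdict(int)
            for acct in pushers:
                for site in promotions[acct]:
                    site_counts[site] += 1
            flagged = {site for site, n in site_counts.items() if n >= min_overlap}
            return pushers, flagged

        accounts, sites = expand_blocklist("slopnews.example", promotions)
        print("suspect accounts:", accounts)  # bot001, bot002, bot003
        print("suspect sites:", sites)        # slopnews.example, cheappills.example

    From there, the flagged accounts could feed the same kind of expansion in the other direction: who else promotes the sites they push.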

    • Showroom7561@lemmy.ca (OP) · 13 hours ago

      I think there are going to be tools to identify networks of people and content you don’t want to interact with. This website is pushed by that social media account, which is boosted by these 2000 accounts that all exhibit bot-like behavior? Well, let’s block the website, of course, but also let’s see who else those 2000 bots promote; let’s see who else promotes that website.

      In an ethical, human-first world, that would be the case.

      Do you think that social media platforms, which run on stealing users’ attention so they can harvest their private data and behaviour history, would want to block content that’s doing exactly that? No way. Not ever.

      And the incentive to make easy money drives users who otherwise wouldn’t have the skill or talent to create and present content to just type in a prompt and send it as a post… over and over, automated so that no effort at all needs to be made. Do this a million times over, and there’s no way to avoid it.

      And once we get to the point where AI content can be generated on-the-fly for each doom-scrolling user based on their behaviour on the platform, it’s game over. It’ll be like digital meth, but disguised to look funny/sexy/informative/cute/trustworthy.

      I’m using tools to blacklist AI sites in search, but the lists aren’t keeping up, and they don’t extend beyond search.

      There will come a point, probably very soon, where companies will figure out how to deliver ads and AI content as if it were from the original source content, which will make it impossible to block or filter out. It’s a horrific thought, TBH.

      • MagicShel@lemmy.zip · 13 hours ago

        And once we get to the point where AI content can be generated on-the-fly for each doom-scrolling user based on their behaviour on the platform, it’s game over.

        Only if people want what AI is making. I’ve been using LLMs for about 5 years. I’ve been integrating them into a project for about 3. And I don’t think anyone is going to find AI-generated slop entertaining. I have played with generating text, images, and music, and once you get over the novelty, it wears thin really quickly.

        If you fill someone’s feed with that stuff, they are going to leave over time. But I mean AI isn’t even that concerning to me. I’ve been thinking about this social trust graph tool for a decade. Social media has been overwhelmingly garbage at least that long.

        I’m using tools to blacklist AI sites in search, but the lists aren’t keeping up, and they don’t extend beyond search.

        Crowdsource that. Plug a blocklist into a Pi-hole and open it up for contributions.
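
        Something like this on the contribution side, as a rough Python sketch. The directory layout, file names, and output path are hypothetical; it assumes contributors submit plain domain lists and that the Pi-hole is then pointed at wherever the merged list gets published.

            # Toy sketch: merge contributed domain lists (one domain per line,
            # '#' comments allowed) into a single deduplicated file that a
            # Pi-hole instance could subscribe to as an adlist.
            from pathlib import Path

            def merge_blocklists(contrib_dir="contrib", out_file="ai-slop-blocklist.txt"):
                domains = set()
                for path in Path(contrib_dir).glob("*.txt"):
                    for line in path.read_text().splitlines():
                        line = line.strip().lower()
                        if line and not line.startswith("#"):
                            domains.add(line)
                # one sorted, deduplicated domain per line
                Path(out_file).write_text("\n".join(sorted(domains)) + "\n")
                return len(domains)

            if __name__ == "__main__":
                print(f"merged {merge_blocklists()} domains")

        Contributions could then just be pull requests or form submissions that drop files into that directory.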

        There will come a point, probably very soon, where companies will figure out how to deliver ads and AI content as if it were from the original source content, which will make it impossible to block or filter out.

        If they do, it will also be impossible for them to track, and thus to get paid for.

        The internet is largely self-healing. I mean, I might have preferred it 35 years ago, and I’m not saying things are great, but you sound like you’re spiraling a bit and I just want you to know things will be alright. I’m way more worried about Trump than AI on the internet.

        • bampop@lemmy.world · 20 minutes ago

          One development we may see imminently is the infiltration of any areas of the internet not currently dominated by AI slop. Once AI systems can generally mimic real users successfully, the next step would be to flood anything like Lemmy with fake users whose purpose is mainly to overwhelm the system while avoiding detection. At the same time, they could deploy more obvious AI bots. Any crowdsourced attempt at identifying AI may find that many of its contributors are infiltration bots who gain trust by identifying and removing the obvious bots. In this way, any attempt at creating a space not dominated by AI and controlled misinformation can be undermined.

  • daniskarma@lemmy.dbzer0.com · 16 hours ago

    I seriously don’t think that would be what happens.

    Things that have gone to shit with AI were things that were shit anyway. Unpaid interns, bot farms, AI… all the same. I don’t think it will be much of an issue. I’m more worried about the end of the free, ad-less internet. That’s where I’ve been seeing more and more of a decline.

    And some fuckers will probably use the excuse of the “AI threat” to start putting golden walls around some spaces.

    My red line is that I don’t pay for things that should be free. If most of the internet became paywalled, I suppose I would have to live with all the data I’ve hoarded over the years.

    Though I suspect there will always be a free internet.

  • dan69@lemmy.world · 17 hours ago

    Quitting is not an option! Nor should it be. There should be new ways to flag and call out “A1” crap. It could just be a phase, a lot like other societal trends.

    • akilou@sh.itjust.works · 17 hours ago

      Not only is quitting not an option, but people who don’t have access to broadband internet at home, a smartphone, or unlimited data are increasingly marginalized. Want to read our menu? Scan this QR code. Pay for parking? Use our app. Attend a public meeting? Click here to register for the Zoom.

  • reksas@sopuli.xyz · 13 hours ago

    I have pretty much just 3 websites I use regularly (here, YouTube, Twitch) and some random ones I look up if I need something specific. If I couldn’t block ads anymore, I don’t think I’ll keep using YouTube. I don’t expect anything meaningful involving other people outside of non-corporate services anyway.

  • anon6789@lemmy.world · 16 hours ago

    We just collectively need to get better at vetting sources. It’s something we can do individually, or collectively through moderation.

    I mainly just share pics here, but I do try to give a decent chunk of educational content as well. I take what I share seriously, because I want it treated seriously at times. I’m honest that I’m not an expert, just a hobbyist. I always include sources, or say so if something comes from my own limited experience. I try to verify things against at least two sources before sharing anything from a new source. I always try to be clear when I’m hypothesizing about something and I’m not certain of it.

    It’s probably taking my content more seriously than necessary, but I take pride in what I post and I want to be seen as a trusted person in the community.

    I think the last few years have made it clear to anyone capable of understanding that we can no longer just take people at their word without some process of establishing trust. Like anything else, we can wait for someone else to fix it, or we can up our own games, on both providing and receiving information.

    • Showroom7561@lemmy.ca (OP) · 16 hours ago

      It’s probably taking my content more seriously than necessary, but I take pride in what I post and I want to be seen as a trusted person in the community.

      Plot twist: How do I know you aren’t a bot? /s

      As information multiplies, and people have less time to apply critical thinking or skepticism to what they see, we’ll have an extremely hard time sorting through it all.

      Even if we had a crowdsourced system that generates a whitelist of trusted sites, bots could easily overwhelm such a system, skewing the results. If Wikipedia, for example, had bots tampering with the data at a million times the rate that it does now, would anyone still want to use it?

      One option might be an invite-only system, and even that would create new problems with fragmentation and exploitation.

      Trust is almost becoming a thing of the past because of unprecedented digital threats.

      • anon6789@lemmy.world · 16 hours ago

        Work does suck the life out of me, but I’d hope I can still pass a Turing test! 😜

        There are always going to be people who value facts and knowledge, and they will always find each other for their own sanity.

        Since I focus on animal stuff, I come across a lot of AI content trying to win cuteness karma. I do see some convincing-looking things that make me do a double take, because animals can do some weird things after all, but some stuff is never going to be physically possible. Some color combos just don’t exist. It may take a more trained eye to spot things, but there are still going to be people calling stuff out, and there will be forums where things get pulled down if they’re not real.

        In that regard, I worry about some real things being lost, at least from the view of some of the general public, when real things that can’t be verified get downvoted/taken down/etc. But those with real interest will still work to conclusively verify or disprove things of questionable value.

        People just want truth to get out. Whether you’re interested in education or conspiracy, from whichever direction most of us approach things, we just want to know the truth to the best of our abilities. That does bring inherent troubles and creates avenues to poison the well, but as hard as the bad actors will work, the good actors will be working to clean it just as hard.

        ETA:

        Trust is almost becoming a thing of the past because of unprecedented digital threats.

        I also encourage people to question me. I’m happy to confirm things, because I want you to learn what I have learned; I found it cool enough to study and share with you in the first place. Questioning what I present also leads me to learn more, exploring subject matter I wouldn’t have thought to pursue on my own, or finally getting to something I’ve been meaning to learn. Someone questioning my knowledge is both an opportunity for me to teach and to learn. And if I was wrong, hopefully afterwards I will know what is correct, and that strengthens me as a whole, so long as I accept that I was wrong, learn from the experience, and don’t act immaturely about it.