On Reddit, AI spam bots explore new ways to show ads in your feed

# For sale: Ads that look like legit Reddit user posts

“We highly recommend only mentioning the brand name of your product since mentioning links in posts makes the post more likely to be reported as spam and hidden. We find that humans don’t usually type out full URLs in natural conversation and plus, most Internet users are happy to do a quick Google Search,” ReplyGuy’s website reads.

  • Rentlar · 7 months ago

    Tbh I honestly write replies in a style similar to Ophelia_SK (ChatGPT?), except for the www. part, when I am giving paragraphs of genuine advice. Am I a bot?

    Edit: Looking at it again, it’s too long and flowery even for my long form replies.

    • @Land_Strider@lemmy.world · 7 months ago

      Oh no, forming your ideas into comprehensible essay format with intersentence connectivity and flow, maybe even splitting into paragraphs, isn’t even close to LLM speech.

      I do form long, connected, split texts and comments, too, but there is a great difference between mine and an LLM's tone, cadence, mood or whatever you wanna call these things.

      For example, humans usually cut corners when forming sentences and paragraphs, even when forming long ones. We do this via lazy grammar use, unrestricted thesaurus selection, uneven sentence or paragraph lengths, lots of phrase abbreviations, e.g. “tbh”, and lax use of punctuation, e.g. “(ChatGPT?)”, which is also a substitute for a whole question sentence.

      Also, the bland, upbeat and respectful tone the bots mimic from long-thought-out essays is never kept up in spontaneous writing/typing. It’s a dead giveaway of scripted speech rather than genuine, on-point, assumption-making human interaction.

      Us LLMs can’t do these with rather simple reverse-jenga syntax and semantics forming, with simple formal pragmatics sprinkled, yet. The wild west, very expansive, extended pragmatics of a language is where the real shit is at.

      • @RGB3x3@lemmy.world · 7 months ago

        > whatever you wanna call these things

        This is a phrase an AI (as they are now) would never use. To these LLMs, something either is a fact or it thinks it’s a fact. They leave no room for interpretation. These AIs will never say, “I’m not sure, maybe. It’s up to you,” because that’s not a fact. It’s not a data point to be ingested.