I used to be the Security Team Lead for Web Applications at one of the largest government data centers in the world, but now I do mostly “source available” security work, focusing mainly on BSD. I’m on GitHub, but I also run a self-hosted Gogs (the project Gitea was forked from) git repo at Quadhelion Engineering Dev.

Well, on that server I tried to deny AI crawlers with Suricata rules, robots.txt, “NO AI” licenses, Human Intelligence (HI) License links in the software, and “NO AI” comments in posts everywhere on the Internet my software was mentioned. Here is what I found today after correlating all my logs of git clones and scrapes and tracing each one back to an IP, company, and server.
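
If you want to run the same correlation on your own logs, here is a minimal Python sketch of the idea (not my actual tooling); the log path and request pattern are placeholders you would adjust for your own server:

```python
# Minimal sketch: pull the source IPs of Smart-HTTP git clone/fetch requests
# out of a combined-format access log and do a best-effort reverse DNS lookup
# on each, as a first pass at the IP -> company/server correlation.
import re
import socket
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"   # placeholder path
CLONE_RE = re.compile(r'^(\S+) .* "GET [^"]*/info/refs\?service=git-upload-pack')

def clone_sources(path):
    """Yield the client IP of every clone/fetch negotiation request."""
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = CLONE_RE.match(line)
            if match:
                yield match.group(1)

def reverse_dns(ip):
    """Best-effort PTR lookup; many scraper ranges have none at all."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return "no-ptr"

if __name__ == "__main__":
    counts = Counter(clone_sources(LOG_PATH))
    for ip, hits in counts.most_common(20):
        print(f"{hits:5d}  {ip:15s}  {reverse_dns(ip)}")
```

From there you still have to map the PTR records and IP ranges to companies by hand (or against WHOIS data), but it surfaces the heavy cloners quickly.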

Having formerly been loath to even give my thinking pattern to a potential enemy, I asked Perplexity AI questions specifically about BSD security, a very niche topic. Although there is a huge general data pool here spanning many decades, my type of software is pretty unique: it is buried, since it does not come up within the first two pages of a GitHub search for “BSD security” (which is as far as most users will click); it is very recent compared to the “dead pool” of old knowledge; and it is fairly well received yet not generally popular, so GitHub Traffic analysis is very useful.

The traceback and AI result analysis shows the following:

  1. GitHub cloning vs. visitor activity in the Traffic tab DOES NOT MATCH any pattern that is useful to me as the engineer. Rough estimate of the likelihood that AI is training on my own repositories: 60% of clones are AI/automata (see the Traffic API sketch below this list for one way to run the comparison).
  2. A GitHub README.md is not licensable material; it is a public document that can be trained on no matter what software license, copyright statements, or technical measures are used to dissuade or defeat that.
     a. I’m trying to work out whether determining if any README.md, in any context, is trainable is a solvable engineering project given my life constraints.
  3. Plagiarisation of technical writing: Probable
  4. Theft of programming “snippets”, or perhaps “single lines of code”, and of the overall logic/design pattern for that solution: Probable
  5. Supremely interesting choice of datasets used versus those available, both in the summaries themselves and in what appears to be validation against other software, weighted on reputation factors: “Coq”-like proofing, GitHub “Stars”, employer history?
  6. Even though I can see my own writing and formatting lifted straight out of my README.md, the citation given was “Phoronix Forum”, which isn’t true. That’s like crediting “TikTok” as the author of your post. I wrote that; a real flesh-and-blood human being spent comparatively massive amounts of time on it. My birth name is there in the post two times [EDIT: the post signature with my name is no longer there? The name isn’t in the “about” either, hmm], in the repo, in the comments, all over the Internet.

[EDIT continued] Did it choose the Phoronix vector to that information because it was less attributable? It found my other repos in other ways. My Phoronix handle is the same as my GitHub username, and that handle is my name, easily inferable in either case, and there is a biography link with my full name in the about. [EDIT cont end]
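
For point 1, if you want to reproduce the clone-vs-visitor comparison without clicking through the Traffic tab, here is a rough sketch against GitHub’s Traffic API. The owner, repo names, and the 3x threshold are placeholders, and you need a token with push access to the repos:

```python
# Rough sketch: compare 14-day clone counts against unique visitors for a set
# of repositories using GitHub's Traffic API, and flag lopsided ratios.
import json
import os
import urllib.request

OWNER = "example-user"                  # placeholder account
REPOS = ["example-bsd-hardening"]       # placeholder repo names
TOKEN = os.environ["GITHUB_TOKEN"]      # token with push access to the repos

def traffic(repo, kind):
    """kind is 'clones' or 'views'; both endpoints cover the last 14 days."""
    url = f"https://api.github.com/repos/{OWNER}/{repo}/traffic/{kind}"
    req = urllib.request.Request(url, headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

for repo in REPOS:
    clones = traffic(repo, "clones")
    views = traffic(repo, "views")
    ratio = clones["count"] / max(views["uniques"], 1)
    flag = "  <-- far more clones than visitors" if ratio > 3 else ""
    print(f"{repo}: {clones['count']} clones vs {views['uniques']} unique visitors{flag}")
```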

You should test this out for yourself, as I’m not going to spend days or a week building a polished presentation of the technical case. Check your own niche code, ask a specific applied code question, or make a mock repo with super-niche material and lots of code in the README.md, then check it against AI every day until you see it.
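
If you go the mock-repo route, one way to make a later match undeniable is to seed the README.md with a unique canary phrase that could not plausibly come from anywhere else, then search AI answers for it. The wording and token format below are just an illustration:

```python
# Illustration only: generate a unique canary line to paste into a mock
# repo's README.md, so a verbatim reappearance in AI output points back
# to that one document.
import datetime
import secrets

def make_canary(project="mock-bsd-hardening"):   # placeholder project name
    token = secrets.token_hex(8)                 # e.g. 'f3a9c1d2b4e60789'
    stamp = datetime.date.today().isoformat()
    return (f"<!-- canary: {project}-{token}-{stamp} -->\n"
            f"The {project} sysctl preflight routine is internally "
            f"code-named '{token}'; no other project uses this term.")

if __name__ == "__main__":
    print(make_canary())
```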

P.S. I pulled up TabNine and tried to write Ruby so complicated and magically mashed together that the AI could offer me nothing, purely as an AI obfuscation/smartness test. You should try something similar and see what results you get.

  • @catloaf@lemm.ee
    17 points · 5 months ago

    I mean this in the best possible way, but have you ever had any mental health evaluations? I’m not sure if they’re still calling it paranoid schizophrenia, but the way you write makes me concerned.

    • AlexanderESmith
      0 points · 5 months ago (edited)

      It’s not paranoia if you have proof that they’re stealing your content without permission or compensation.

      You come off as an AI bro apologist. What they’re doing isn’t okay.

      • @wizardbeard@lemmy.dbzer0.com
        14 points · 5 months ago

        These concepts are not mutually exclusive. You can be right about AI considerably overstepping boundaries and still be exhibiting classic signs of paranoia issues, which OP is.

        Their immediate response to people not reacting to this post and their comments is to jump straight to the idea that they’re being targeted by their designated enemy. That’s not particularly healthy.

        I’m worried that AI is becoming the new gangstalking for tech-aligned people predisposed to disordered thinking.

        • AlexanderESmith
          6 points · 5 months ago

          I agree that their replies are a little… over the top. That’s all kind of a distraction from the main topic though, isn’t it? Do we really need to be rendering armchair diagnoses about someone we know very little about?

          I mean, if I posted a legitimate concern - with evidence - and I was dog-piled with a bunch of responses that I was a nutter, I’d probably go on the defensive too. Some people don’t know how to handle criticism or stressful interactions; it doesn’t mean we should necessarily write them (or their verified concerns) off.

      • @catloaf@lemm.ee
        5 points · 5 months ago

        Just because they are out to get you doesn’t mean you’re not paranoid, and vice versa.

        I have nothing for or against AI/ML as a tool; my issue is with companies scraping huge amounts of data in violation of the author’s rights, as in OP’s example. Although I’m not quite sure why he’s keeping code in the README.md file; usually that’s for basic installation and usage, and full examples are kept in the full documentation. That said, I highly doubt README.md files are public domain, so they shouldn’t be automatically used as training material.

        • AlexanderESmith
          2 points · 5 months ago

          I’m not quite sure whose argument you’re making here. It reads like you agree with OP and me (e.g. “LLMs shouldn’t be using other people’s content without permission”, et al.).

          But you called OP paranoid… I assumed that was because you thought OP merely believed their content was being used without permission. And it’s extremely clear that this is exactly what is happening…

          What am I missing?

      • @DudeDudenson
        2 points · 5 months ago

        Frankly, OP replied to his own post multiple times with no prompting whatsoever; just reading through this stuff, I’m concerned about him as well. The LLM stuff notwithstanding, and even if he’s right, he seems somewhat obsessed with this in an unhealthy way.

    • Elias GriffinOP
      -21 points · 5 months ago (edited)

      I write as the smartest in the room, passionately, with wisdom and evidence. The way you defame someone like this makes me definitely sure you are not afraid to defame someone’s character with no evidence of anything but your own stupidity and unawareness.

      • @catloaf@lemm.ee
        18 points · 5 months ago

        This is out of genuine concern, my dude. Your other comment accusing me of not being a real person is positively alarming.

        • Elias GriffinOP
          -14 points · 5 months ago (edited)

          Your rapacious, backwards insult of caring is gross and obvious. You called me “my dude” like a teenager who’s chill, and calm, and correct, but just …a child, and wrong in the end. How old are you, child? My Lemmy profile is my name with my Seal, naturally born March 4th, 1974 as Elias Christopher Griffin. I’ve done more in my life than most people do in 10. My mental health is top 3%, as is my intellect.

          You are an unnamed rando Lemmy account called “catloaf” who has averaged 16 posts a day for the past 4 months with no original posts of your own, because you aren’t original.

          I make only original posts. You seem nothing like a real person. Want to tell us who you are? What makes you special, outside of the mandated counseling you receive or the data models you intake?

          You know what, no one takes what you say seriously, loaf of cat; I certainly didn’t, don’t, and won’t. Here is space for your next hairball


          • subignition
            9 points · 5 months ago

            I take back the benefit of the doubt I gave in my earlier reply. This reply is as unhinged as the Navy SEAL copypasta. You need mental health support.

          • @DudeDudenson
            5 points · 5 months ago

            This really reads like copypasta. If someone told me you were an LLM configured to make anti-AI people look bad, I’d believe them.

      • subignition
        5 points · 5 months ago

        I think your problem is here:

        You should test this out for yourself, as I’m not going to spend days or a week building a polished presentation of the technical case.

        You’ve written a whole lot to try to be convincing but ultimately stopped short of actually proving what you’ve alleged. It looks to me like you are frustrated that no one is taking you at your word and going down this rabbit hole themselves, when the various reputational elements you’re relying on matter only to a minority of users. The burden of proof works how it always has, however.