• @cheese_greater@lemmy.world · 108 · 1 year ago (edited)

    I would be in trouble if this was a thing. My writing naturally resembles the output of a ChatGPT prompt when I’m not joke answering or shitposting.

      • bioemerl · 71 · 1 year ago

        Because you’re training a detector on something that is designed to emulate regular language as closely as possible, and human speech has so much incredible variability that it’s almost impossible to identify whether something has been written by an AI.

        You can maybe detect the typical, generic ChatGPT-style outputs, but you can steer a conversation with ChatGPT or any of the other, much better local models (privacy and control are aspects which make them better), and after doing that you can get radically human-seeming outputs that are totally different from anything ChatGPT outputs by default.

        In short, given a static block of text it’s going to be nearly impossible to detect whether it came from an AI. It’s just too difficult a problem, and any solution will be immediately obsolete the next time someone fine-tunes their own model.

        • stevedidWHAT · 6 · 1 year ago

          Yeah, this makes a lot of sense considering the vastness of language and its imperfections (English, I’m mostly looking at you, ya inbred fuck)

          Are there any other detection techniques that you know of? What about forcing AI models to have a signature that is guaranteed to be identifiable, permanent, and unique for each tuning produced? It’d have to be not directly noticeable but easy to calculate, in order to prevent any “distractions” for the users.

          • @Grimy@lemmy.world · 18 · 1 year ago

            The output is pure text, so you would have to hide the signature in the response itself. On top of being useless, since most users slightly modify the text after receiving it, it would probably have a negative effect on quality. It’s also insanely complicated to train that kind of behavior into an LLM.

            • stevedidWHAT · 2 · 1 year ago

              Your implementation of my concept might be useless, but that doesn’t mean the concept is.

              One possible solution would be to look at how responses are structured, letter frequencies, etc. The flexible, ambiguous nature of natural language means you can word things in many, many different ways, which allows for some creative meta-techniques to accomplish a fingerprint.

              • Terrasque · 3 · 1 year ago

                It is a valid idea, and not impossible. When generating text, a language model produces a list of possible tokens… or, more correctly, it gives a weight to every possible token, where most would have near-zero weight. Then there are multiple ways to pick the next token, from always picking the top one, to selecting randomly from the top X tokens, to mirostat, and so on. You could probably do some extra weighting to embed a sort of signature, at some quality loss.
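A rough sketch of that "extra weighting" idea (everything here is illustrative: the seeding scheme, the boost size, and the 50/50 split are all made up, not any real implementation): bias the token weights toward a pseudo-random "green" subset of the vocabulary that only the generator and a matching detector know how to reconstruct.

```python
import random

def greenlist(prev_token_id: int, vocab_size: int, fraction: float = 0.5) -> set:
    # Deterministically pick a "green" subset of the vocabulary, seeded by
    # the previous token. Generator and detector share this function, so a
    # detector can recompute the same sets and count green-token hits.
    rng = random.Random(prev_token_id)
    return set(rng.sample(range(vocab_size), int(vocab_size * fraction)))

def bias_logits(logits, prev_token_id: int, boost: float = 2.0):
    # Nudge the weights of green tokens up before sampling. Over many tokens,
    # the generated text contains green tokens far more often than chance,
    # which a detector can test for statistically.
    green = greenlist(prev_token_id, len(logits))
    return [w + boost if i in green else w for i, w in enumerate(logits)]
```

The quality loss mentioned above shows up because the boost sometimes promotes a slightly worse token over the model's natural top choice.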

              • Balder · 2 · 1 year ago

                The idea itself is valid, but wouldn’t that just make it more dangerous when malicious agents use the technology without fingerprinting?

                • stevedidWHAT · 1 · 1 year ago

                  Cat’s out of the bag, my friend. Just like the nuke, the ideas are always out there. Once it’s been discovered and shared, that’s that.

                  We can huff and puff and come up with all the cute little laws we want but the fact of the matter is we know the recipe now. All we can do is dive deeper into the technology to understand it even better, make new findings and adapt as we always do.

          • bioemerl · 10 · 1 year ago

            forcing AI models to have a signature that is guaranteed to be identifiable, permanent, and unique for each tuning produced

            Either AI remains entirely in the hands of fucks like open AI or this is impossible and easily removed. AI should be a free common use tool, not an extension of corporate control.

            • stevedidWHAT · 4 · 1 year ago

              Agreed, such power should belong to everyone, or remain undiscovered. Even Oppenheimer knew: once the cat’s out of the bag…

            • roguetrick · 2 · 1 year ago

              Owning the means of AI production huh? I guess anarchists will win after all.

              • bioemerl · 6 · 1 year ago

                It’s no different than owning your computer. Something as absolutely central and productivity-boosting as artificial intelligence should not be kept in the hands of the few.

                The only way that it could be is through government intervention; you don’t need to be an anarchist to be against an OpenAI monopoly.

      • @Eufalconimorph@discuss.tchncs.de · 22 · 1 year ago

        Because AIs are (partly) trained by making AI detectors. If an AI can be distinguished from a natural intelligence, it’s not good enough at emulating intelligence. If an AI detector can reliably distinguish AI from humans, the AI companies will use that detector to train their next AI.

        • stevedidWHAT · -1 · 1 year ago

          I’m not sure I’m following your argument here - you keep switching between talking about AI and AI detectors. The points below are numbered according to the sentences in your reply:

          1. Can you provide any articles or blog posts from AI companies for this or point me in the right direction?
          2. Agreed
          3. Right…

          I’m having trouble finding the support for your claim.

          • TheHarpyEagle · 8 · 1 year ago

            See Generative Adversarial Network (GAN). Basically, making new AI detectors will always be harder than beating current ones: AI detectors have to somehow find a new “tell”, while the target AI need only train itself on the output of the detector to figure out how to trick it.
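A toy sketch of why the detector is always chasing (entirely illustrative - real GANs use gradient updates on neural networks, not a single number): the "generator" here is just a distribution mean, the "detector" a threshold, and each round the generator moves toward the human distribution until the detector's margin vanishes.

```python
def adversarial_rounds(human_mean=0.0, gen_mean=5.0, lr=0.5, rounds=10):
    # Each round: the detector picks its best decision threshold between the
    # two distributions, then the generator shifts toward the human
    # distribution to defeat exactly that threshold.
    threshold = (human_mean + gen_mean) / 2
    for _ in range(rounds):
        threshold = (human_mean + gen_mean) / 2   # detector re-fits its "tell"
        gen_mean -= lr * (gen_mean - human_mean)  # generator adapts to it
    return gen_mean, threshold
```

After a handful of rounds the gap between "generated" and "human" is smaller than any practical detector's noise floor, which is the GAN dynamic described above.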

          • @dack@lemmy.world · 7 · 1 year ago

            At a very high level, training is something like:

            • generate some output
            • give the output a score based on how much it looks like real human text
            • adjust the parameters slightly to improve the score
            • repeat

            Step #2 is also exactly what an “AI detector” does. If someone were able to write code that reliably distinguishes between AI and human text, then AI developers would plug it into that training step in order to improve their AI.

            In other words, if some theoretical machine perfectly “knows” the difference between generated and human text, then the same machine can also be used to make text that is indistinguishable from human text.
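That generate-score-adjust loop can be sketched generically. Here `score` stands in for whatever detector exists ("how human does this look?"), and everything else - hill-climbing instead of gradient descent, the parameter vector, the names - is illustrative:

```python
import random

def train_against(score, params, steps=300, step_size=0.1, seed=0):
    # Generate, score, adjust, repeat: `score` plays the role of the AI
    # detector. Any reliable detector plugged in here is immediately
    # optimized against, which is the point made above.
    rng = random.Random(seed)
    best = score(params)
    for _ in range(steps):
        candidate = [p + rng.uniform(-step_size, step_size) for p in params]
        if score(candidate) > best:  # keep changes the detector rates as more human
            params, best = candidate, score(candidate)
    return params, best
```

The better the detector, the better the training signal it provides, so publishing a working detector hands the generator exactly what it needs.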

            • stevedidWHAT · 3 · 1 year ago (edited)

              Exactly right, I mentioned this in a comment elsewhere but basically we can’t have our cake and eat it too.

              We can’t have a perfect NL impersonator that can also be detected as not NL. (Best case, obviously things aren’t perfect for any AI model, so technically detecting those mistakes could help identify it, but who’s to say what the false-positive rate would look like!)

              Ultimately the cat is out of the bag and I’m not quite sure there is anything we can do now. Some smart fingerprinting solution would be ideal, but I just don’t know how feasible that would remain.

              Edit: source: I took a few 600 level ai classes in college and have made several of my own of varying types and what not

      • sebi · -1 · 1 year ago

        Because generative Neural Networks always have some random noise. Read more about it here

          • @PetDinosaurs@lemmy.world · 5 · 1 year ago

            It almost certainly has some GAN-like pieces.

            GANs are part of the NN toolbox, like CNNs and RNNs and such.

            Basically all commercial algorithms (not just NNs, everything) are what I like to call “hybrid” methods, which means you keep throwing different tools at it until things work well enough.

            • stevedidWHAT · 3 · 1 year ago (edited)

              The findings were for GAN models, though, not GAN-like components.

              • @PetDinosaurs@lemmy.world · 1 · 1 year ago

                It doesn’t matter. Even the training process makes it pretty much impossible to tell these things apart.

                And if we do find a way to distinguish, we’ll immediately incorporate that into the model design in a GAN like manner, and we’ll soon be unable to distinguish again.

                • stevedidWHAT · 0 · 1 year ago

                  Which is why hardcoded fingerprints/identifications are required, to identify the individual as a speaker rather than as AI vs. human. Which is what we’re ultimately agreeing on here, outside of the pedantry of the article and scientific findings:

                  Trying to detect the model that is supposed to pass as human is counterintuitive. They’re direct opposites: if one works, both can’t exist in this implementation.

                  The hard part will obviously be making sure that such a “fingerprint” wouldn’t be removable which will take some wild math and out of the box thinking I’m sure.

                  Tough problem!

          • bioemerl · 2 · 1 year ago

            It’s not even about diffusion models. Adversarial networks are basically obsolete

  • ReallyKinda · 57 · 1 year ago

    I know a couple of teachers (college level) that have caught several GPT papers over the summer. It’s a great cheating tool, but as with all cheating in the past, you still have to basically learn the material (at least for narrative papers) to proofread GPT properly. It doesn’t get jargon right, it makes things up, and it makes no attempt to adhere to reason when it’s making an argument.

    Using translation tools is extra obvious—have a native speaker proof your paper if you attempt to use an AI translator on a paper for credit!!

    • @SpikesOtherDog@ani.social · 14 · 1 year ago

      it makes things up, it makes no attempt to adhere to reason when it’s making an argument.

      It hardly understands logic. I’m using it to generate content, and it continuously asserts information in ways that don’t make sense, relates things that aren’t connected, and forgets facts that don’t flow into the response.

      • @mayonaise_met@feddit.nl · 10 · 1 year ago (edited)

        As I understand it as a layman who uses GPT4 quite a lot to generate code and formulas, it doesn’t understand logic at all. Afaik, there is currently no rational process which considers whether what it’s about to say makes sense and is correct.

        It just sort of bullshits its way to an answer based on whether words seem likely according to its model.

        That’s why you can point it in the right direction and it will sometimes appear to apply reasoning and correct itself. But you can just as easily point it in the wrong direction and it will do that just as confidently too.

        • @Aceticon@lemmy.world · 7 · 1 year ago

          It has no notion of logic at all.

          It roughly works by piecing together sentences based on the probability of the various elements (mainly words, but also more complex structures) appearing in various relations to each other, the “probability curves” (not quite probability curves, but that’s a good enough analog) having been derived from the very large language training sets used to train them (hence LLM - Large Language Model).

          This is why you might get pieces of argumentation which are internally consistent (or merely familiar segments from actual human posts where people are making an argument) but not consistent with each other - the thing is not building an argument following a logical thread, it’s just putting together language tokens in common ways which, in its training set, were found associated with each other and with language-token structures similar to those in your question.
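A toy version of that mechanism (the probability table is made up, and real LLMs condition on far more than the previous word): generation is just repeated sampling from "which word tends to follow this one", with no step that checks whether the result is a sound argument.

```python
import random

# Made-up next-word probabilities standing in for a trained model.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, rng):
    # Repeatedly sample the next word from the learned co-occurrence
    # weights until no continuation is known. No logic is consulted;
    # fluency comes purely from the statistics in the table.
    words = [start]
    while words[-1] in BIGRAMS:
        options = BIGRAMS[words[-1]]
        words.append(rng.choices(list(options), weights=options.values())[0])
    return words
```

Every output is locally plausible ("the cat sat down") while nothing guarantees global coherence, which is exactly the internally-consistent-but-mutually-inconsistent behavior described above.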

    • @pc_admin@aussie.zone · -28 · 1 year ago (edited)

      Any teacher still issuing out-of-class homework or assignments is doing a disservice IMO.

      Of course people will just GPT it… you need to get them off the computer and into an exam room.

      • @SmoothLiquidation@lemmy.world · 39 · 1 year ago

        GPT is a tool that the students will have access to their entire professional lives. It should be treated as such and worked into the curriculum.

        Forbidding it would be like saying you can’t use Photoshop in a photography class.

        • @Neve8028@lemm.ee · 22 · 1 year ago

          It can definitely be a good tool for studying or for organizing your thoughts but it’s also easily abused. School is there to teach you how to take in and analyze information and chat AIs can basically do that for you (whether or not their analysis is correct is another story). I’ve heard a lot of people compare it to the advent of the calculator but I think that’s wrong. A calculator spits out an objective truth and will always say the same thing. Chat GPT can take your input and add analysis and context in a way that circumvents the point of the assignment which is to figure out what you personally learned.

          • @Benj1B@sh.itjust.works · -7 · 1 year ago

            Where it gets really challenging is that LLMs can take the assignment input and generate an answer that is actually more educational for the student than what they learned in class. A good education system would instruct students in how to structure their prompts in a way that helps them learn the material - because LLMs can construct virtually limitless examples and analogies and write in any kind of style, you can tailor them to each student with the correct prompts and get a level of engagement equal to a private tutor for every student.

            So the act of using the tool to generate an assignment response could, if done correctly and with guidance, be more educational than anything the student picked up in class - but if its not monitored, if students don’t use the tool the right way, it is just going to be seen as a shortcut for answers. The education system needs to move quickly to adapt to the new tech but I don’t have a lot of hope - some individual teachers will do great as they always have, others will be shitty, and the education departments will lag behind a decade or two as usual.

            • @Neve8028@lemm.ee · 5 · 1 year ago

              Where it gets really challenging is that LLMs can take the assignment input and generate an answer that is actually more educational for the student than what they learned in class.

              That’s if the LLM is right. If you don’t know the material, you have no idea if what it’s spitting out is correct or not. That’s especially dangerous once you get to undergrad level when learning about more specialized subjects. Also, how can reading a paper be more informative than doing research and reading relevant sources? The paper is just the summary of the research.

              and get a level of engagement equal to a private tutor for every student.

              Eh. Even assuming it’s always 100% correct, there’s so much more value to talking to a knowledgeable human being about the subject. There’s so much more nuance to in person conversations than speaking with an AI.

              Look, again, I do think that LLMs can be great resources and should be taken advantage of. Where we disagree is that I think the point of the assignment is to gain the skills to do research, analysis, and generally think critically about the material. You seem to think that the goal is to hand something in.

        • @MrMcGasion@lemmy.world · 9 · 1 year ago

          I’ve been in photography classes where Photoshop wasn’t allowed, although it was pretty easily enforced because we were required to use school provided film cameras. Half the semester was 35mm film, and the other half was 3x5 graphic press cameras where we were allowed to do some editing - providing we could do the edits while developing our own film and prints in the lab. It was a great way to learn the fundamentals and learning to take better pictures in the first place. There were plenty of other classes where Photoshop was allowed, but sometimes restricting which tools can be used, can help push us to be better.

        • ReallyKinda · 6 · 1 year ago

          Depends on how it’s used of course. Using it to help brainstorm phrasing is very useful. Asking it to write a paper and then editing and turning it in is no different than regular plagiarism imo. Bans will apply to the latter case and the former case should be undetectable.

      • ReallyKinda · 10 · 1 year ago

        Even in college? I never had a college course that allowed you to work on assignments in class

        • @Muffi@programming.dev · 1 · 1 year ago

          I studied engineering. Most classes were split into 2 hours of theory, followed by 2 hours of practical assignments. Both within the official class hours, so teachers could assist with the assignments. The best college-class structure by far imo.

  • @Nioxic@lemmy.dbzer0.com · 34 · 1 year ago (edited)

    I have to hand in a short report

    I wrote parts of it and asked chatgpt for a conclusion.

    So I read that, adjusted a few points, and added another couple of points…

    Then rewrote it all in my own wording. (Chatgpt gave me 10 lines out of 10 pages)

    We are allowed to use chatgpt though. Because we would always have internet access for our job anyway. (Computer science)

    • @TropicalDingdong@lemmy.world · 13 · 1 year ago

      I found out on the last screen of a travel grant application I needed a coverletter.

      I pasted in the requirements for the cover letter and what I had put in my application.

      I pasted the results in as the cover letter without review.

      I got the travel grant.

      • @learningduck@programming.dev · 5 · 1 year ago

        Typical for generative AI. I think during training of the model, they must have developed another model to detect whether GPT produces natural-sounding language. That other model may have reached the point where it couldn’t flag GPT output with an acceptable false-positive rate.

  • @Boddhisatva@lemmy.world · 28 · 1 year ago

    OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text. It had an abysmal 26 percent accuracy rate.

    If you ask this thing whether or not some given text is AI generated, and it is only right 26% of the time, then I can think of a real quick way to make it 74% accurate.

      • @dartos@reddthat.com · 10 · 1 year ago (edited)

        Looks like they got that number from this quote from another Ars Technica article: “…OpenAI admitted that its AI Classifier was not ‘fully reliable,’ correctly identifying only 26 percent of AI-written text as ‘likely AI-written’ and incorrectly labeling human-written works 9 percent of the time.”

        Seems like it mostly wasn’t confident enough to make a judgement, but 26% of the time it correctly detected AI text, and 9% of the time it incorrectly identified human text as AI text. It doesn’t tell us how often it labeled AI text as human text, or how often it was just unsure.

        EDIT: this article https://arstechnica.com/information-technology/2023/07/openai-discontinues-its-ai-writing-detector-due-to-low-rate-of-accuracy/

        • @cmfhsu@lemmy.world · 2 · 1 year ago (edited)

          In statistics, everything is based on probability/likelihood - even binary yes-or-no decisions. For example, you might say “this predictive algorithm must be at least 95% statistically confident of an answer, else it defaults to ‘unknown’ or another safe answer”.

          What this likely means is that only 26% of the answers were confident enough to say “yes” (because falsely accusing somebody of cheating is much worse than giving the benefit of the doubt) and were correct.

          There is likely a large portion of answers which could have been predicted correctly if the company had been willing to chance more false positives (potentially getting students mistakenly expelled).
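That "only accuse when very confident" rule can be sketched in a couple of lines (the threshold value and score inputs are illustrative, not OpenAI's actual classifier):

```python
def verdict(p_ai, threshold=0.95):
    # Only call something AI-written when the classifier is very confident;
    # otherwise abstain. Raising the threshold means fewer false accusations
    # but also fewer correct detections - the trade-off described above.
    return "likely AI-written" if p_ai >= threshold else "unclear"

# Hypothetical classifier confidence scores for five documents:
scores = [0.99, 0.97, 0.90, 0.60, 0.30]
strict = [s for s in scores if verdict(s) == "likely AI-written"]          # 2 flagged
loose = [s for s in scores if verdict(s, 0.8) == "likely AI-written"]      # 3 flagged
```

Lowering the threshold from 0.95 to 0.8 flags one more document here; in a real deployment that extra catch comes at the cost of more wrongly accused students.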

    • @notatoad@lemmy.world · 4 · 1 year ago

      It seemed like a really weird decision for OpenAI to have an AI classifier in the first place. Their whole business is generating output that’s good enough that it can’t be distinguished from what a human might produce, and then they went and made a tool to try and point out where they failed.

      • @Boddhisatva@lemmy.world · 2 · 1 year ago

        That may have been the goal. Look how good our AI is, even we can’t tell if its output is human generated or not.

    • Max Demon · 12 · 1 year ago

      True -

      1. Write points/summary
      2. Have AI expand it into many words
      3. Post
      4. Reader uses AI to summarize the post, preferably back into points
      5. Profit??

  • @Matriks404@lemmy.world · 22 · 1 year ago

    Did human-generated content really become so low-quality that it is indistinguishable from AI-generated content?

  • HelloThere · 20 · 1 year ago

    Regardless of if they do or don’t, surely it’s in the interests of the people making the “AI” to claim that their tool is so good it’s indistinguishable from humans?

    • stevedidWHAT · 15 · 1 year ago

      Depends on whether they’re more researchers or a business, imo. Scientists generally speaking are very cautious about making shit claims, bc if they get called out, that’s their career, really.

      • HelloThere · 6 · 1 year ago (edited)

        It’s literally a marketing blog posted by OpenAI on their site, not a study in a journal.

      • @BetaDoggo_@lemmy.world · 5 · 1 year ago

        OpenAI hasn’t been focused on the science since the Microsoft investment. A science focused company doesn’t release a technical report that doesn’t contain any of the specs of the model they’re reporting on.

      • @Zeth0s@lemmy.world · 4 · 1 year ago (edited)

        A few decades ago, probably; nowadays “scientists” make a lot of BS claims to get published. I was in the room when a “scientist” publishing several Nature papers per year asked her student to write up research without any results in a way that made it look like it had something important, for a relatively good-IF publication.

        That day I decided I was done with academia. I had seen enough.

    • pewter · 0 · 1 year ago

      Yes, but it’s such a falsifiable claim that anyone is more than welcome to prove them wrong. There’s a lot of slightly different LLMs out there. If you or anyone else can definitively show there’s a machine that can identify AI writing vs human writing, it will either result in better AI writing or it would be an amazing breakthrough in understanding the limits of AI.

      • HelloThere · 2 · 1 year ago

        People like to view the problem as a paradox - can an all-powerful God create a rock they cannot lift? - but I feel that’s too generous; it’s more like marking your own homework.

        If a system can both write text, and detect whether it or another system wrote that text, then “all” it needs to do is change that text to be outside of the bounds of detection. That is to say, it just needs to convince itself.

        I’m not wanting to imply that that is easy, because it isn’t, but it’s a very different thing to convincing someone else, especially a human, that understands the topic.

        There is also a false narrative involved here, that we need an AI to detect AI which again serves as a marketing benefit to OpenAI.

        We don’t, because they aren’t that good, at least, not yet anyway.

  • @irotsoma@lemmy.world · 18 · 1 year ago

    A lot of these relied on common mistakes that “AI” algorithms make but humans generally don’t. As language models are improving, it’s harder to detect.

    • Cethin · 14 · 1 year ago

      They’re also likely training on the detector’s output. That’s why they build detectors. It isn’t for the good of other people; it’s to improve their assets. A detector is used to discard inputs it knows are written by AI so the model doesn’t train on that data, which leads to it out-competing the detection AI.

    • Turun · 15 · 1 year ago

      Or, because you can’t rely on computers to tell you the truth. Which is exactly the issue with LLMs as well.

        • Turun · 2 · 1 year ago

          I was mostly referring to the top comment. If you need to write an essay on Hamlet, the book can in fact not lie, because the entire exercise is to read the book and write about the contents of it.

          But in general, you are right. (Which is why it is proper journalistic procedure to talk to multiple experts about a topic you write about. Also, a good article does not present a foregone conclusion, but instead lets readers form their own opinion on a topic by providing the necessary context and facts without the author’s judgement. LLMs as a one-stop shop do not provide this, and are less reliable than listening to a single expert would be.)

    • @SpaceCowboy@lemmy.ca · 2 · 1 year ago

      AI might democratize grifting. You no longer will have to have the resources that Russia and China have devoted to this kind of thing. Anyone will be able to generate vast amounts of fake inflammatory rhetoric.

      Then once there’s a 99.9% chance that the person you’re talking to on social media is an AI, people might realize how stupid it is to believe anything they read on the internet.

  • AutoTL;DR · 3 · 1 year ago

    This is the best summary I could come up with:


    In a related FAQ, they also officially admit what we already know: AI writing detectors don’t work, despite frequently being used to punish students with false positives.

    In July, we covered in depth why AI writing detectors such as GPTZero don’t work, with experts calling them “mostly snake oil.”

    That same month, OpenAI discontinued its AI Classifier, which was an experimental tool designed to detect AI-written text.

    Along those lines, OpenAI also addresses its AI models’ propensity to confabulate false information, which we have also covered in detail at Ars.

    “Sometimes, ChatGPT sounds convincing, but it might give you incorrect or misleading information (often called a ‘hallucination’ in the literature),” the company writes.

    Also, some sloppy attempts to pass off AI-generated work as human-written can leave tell-tale signs, such as the phrase “as an AI language model,” which means someone copied and pasted ChatGPT output without being careful.


    The original article contains 490 words, the summary contains 148 words. Saved 70%. I’m a bot and I’m open source!