• @mfat@lemmy.ml · 1 point · 18 hours ago

    Is this why we are solving motorcycle, stairs, fire hydrant, etc. captchas?

    • NutWrench · 2 points · 6 hours ago

      So that ‘AI’ self-driving car software can have an image reference database that relies on people who download porn.

  • apotheotic (she/her) · 12 points · 11 hours ago

    I am missing a small amount of context - is reddit randomly prompting users to describe images in posts? Or is it prompting you to describe your own image at upload time?

    Context aside, I definitely think that providing image descriptions is something we should do in spite of the fact that it’s definitely going to be used to train AI. Choosing not to do so is throwing our blind peers under the bus just to fractionally reduce the amount of AI training data.

    • Robust Mirror · 12 points · 10 hours ago

      I haven’t been there in a while, but I remember there was a sub of volunteers who were around for years and went around just describing images, way before AI LLMs were really a thing.

      I’m assuming this is something new being pushed by reddit itself, but as you said, it’s a good thing regardless.

      • apotheotic (she/her) · 5 points · 9 hours ago

        As long as, even if reddit is using them to train LLMs, they are actually still using the descriptions to make those images accessible, which I don’t take for granted.

  • circuitfarmer · 30 points · edited · 20 hours ago

    At this point, any request for information could potentially be used as training data. That includes things like captchas.

    I recommend everyone have an extremely literal interpretation of “labor”. Unless you have tremendous insight into where your data is going and how it is being used (and perhaps even then), then assume any ask is ultimately an ask for unpaid labor.

    Obviously you can’t avoid things like captchas, but you can avoid things like this.

    Edit: and it should go without saying, but anything you upload to socials is probably automatic training data at this point. The best approach is simply not to engage with corporate social networks.

    Though Lemmy is not corporately controlled, the information is publicly accessible, so even this post is potential training data to be scraped. That is harder to avoid, short of not using the internet altogether, but at least avoiding the corpo routes is a good start.

    • @AndrasKrigare@beehaw.org · 3 points · 7 hours ago

      Bear in mind, with this liberal interpretation, any time you access a website, that is also consuming someone’s labor and if you don’t have a subscription to it, it is unpaid.

    • @flashgnash@lemm.ee · 12 points · 11 hours ago

      Captchas have been used for training AI for years; that’s nothing new. IIRC, the reason you do two is that one confirms you’re human and the other provides training data.
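      The two-image scheme described here can be sketched roughly. This is an illustrative model of the idea, not reCAPTCHA’s actual implementation; all names and thresholds are made up. The control image has a trusted label and verifies the user; a verified user’s answer to the unknown image is recorded as a label vote, and a label is promoted once enough users agree.

```python
from collections import Counter

# Illustrative sketch of the two-image captcha idea (hypothetical, not
# reCAPTCHA's real implementation; names and thresholds are invented).

known_labels = {"img_001": "fire hydrant"}   # control images with trusted labels
candidate_labels = {}                         # label votes for unknown images

def submit_captcha(control_id, control_answer, unknown_id, unknown_answer):
    """Verify the user against the control image; if they pass,
    record their answer for the unknown image as a label vote."""
    if control_answer != known_labels[control_id]:
        return False  # failed the human check; discard the other answer too
    candidate_labels.setdefault(unknown_id, Counter())[unknown_answer] += 1
    return True

def consensus_label(unknown_id, min_votes=3):
    """Promote an unknown image's label once enough verified users agree."""
    votes = candidate_labels.get(unknown_id)
    if votes:
        label, count = votes.most_common(1)[0]
        if count >= min_votes:
            return label
    return None
```

      Once an unknown image reaches consensus, it can itself become a control image, which is how such a system bootstraps its labelled dataset.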

    • @flashgnash@lemm.ee · 4 points · 11 hours ago

      On the flip side, training AI for image recognition has the potential to auto-label images for the blind.

      It could work either way: website owners could generate descriptions themselves when a human-written one isn’t provided, or a browser extension could auto-label any unlabelled images on the screen.
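      The auto-labelling idea can be sketched as follows. This is a simplified illustration, not a real extension: the captioning function is a stub standing in for an actual ML model, and the regex-based HTML handling is for demonstration only (a real implementation would use a proper HTML parser and only fill in images a human hasn’t already described).

```python
import re

def caption_image(src):
    """Stand-in for an image-captioning model; a real tool would
    call a local or remote ML model here (hypothetical)."""
    return f"auto-generated description of {src}"

def add_missing_alt_text(html):
    """Add alt text to <img> tags that lack an alt attribute.
    Regex approach for illustration only."""
    def fix(match):
        tag = match.group(0)
        if re.search(r'\balt\s*=', tag):
            return tag  # already described by a human; leave it alone
        src = re.search(r'src="([^"]*)"', tag)
        desc = caption_image(src.group(1) if src else "unknown image")
        # Re-open the tag and append the generated alt attribute
        return tag[:-1] + f' alt="{desc}">'
    return re.sub(r'<img\b[^>]*>', fix, html)
```

      Note the early return when an alt attribute already exists: a human-written description should always win over a generated one.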

      • @Grimy@lemmy.world · -3 points · 1 day ago

        I’m sure blind people are happy to have the models that are built with this data, and since both the images and the descriptions are public-facing, anyone can use them, including open source projects.

          • @Grimy@lemmy.world · 1 point · 6 hours ago

            No. That is what data brokers and big AI companies are pushing for, but currently it’s considered fair use.

            Anything public-facing can be used for ML, and it’s been like that for quite a while. It might change based on all the ongoing lawsuits, but I doubt it will; it would be economic suicide, and China doesn’t care if it’s “theft”.

            It’s better for us, the consumers, in any case, since having to pay for data would kill the open source scene and give OpenAI and the other 3 companies a de facto monopoly.