• AutoTL;DR
    English
    11 years ago

    This is the best summary I could come up with:


Large language models are trained on all kinds of data, most of which, it seems, was collected without anyone’s knowledge or consent.

    Now you have a choice whether to allow your web content to be used by Google as material to feed its Bard AI and any future models it decides to make.

    It’s as simple as disallowing “User-Agent: Google-Extended” in your site’s robots.txt, the document that tells automated web crawlers what content they’re able to access.
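The opt-out described above is an ordinary robots.txt rule. A minimal sketch (the `Disallow` path is the publisher’s choice; `/` here blocks the whole site from the Google-Extended crawler token):

```
# robots.txt at the site root
# Opt this site out of use for Google's generative AI training
User-agent: Google-Extended
Disallow: /
```

Note that Google-Extended is a product token rather than a separate crawler, so this rule does not affect how Googlebot indexes the site for Search.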

    “We’ve also heard from web publishers that they want greater choice and control over how their content is used for emerging generative AI use cases,” the company’s VP of Trust, Danielle Romain, writes in a blog post, as if this came as a surprise.

On one hand, that is perhaps the best way to present this question, since consent is an important part of the equation, and a positive choice to contribute is exactly what Google should be asking for.

    On the other, the fact that Bard and its other models have already been trained on truly enormous amounts of data culled from users without their consent robs this framing of any authenticity.


    The original article contains 381 words, the summary contains 190 words. Saved 50%. I’m a bot and I’m open source!