• @gayhitler420@lemm.ee
    61 · 10 months ago

    robots.txt isn’t a basic social contract, it’s a file intended to save web crawlers precious resources.

    • umbraroze
      55 · 10 months ago

      Yup. The robots.txt file isn't only for blocking bots from a site outright; it's also for steering them away from resources that aren't interesting to human readers, even indirectly.

      For example, MediaWiki installations are pretty clever about this: by default, /w/ is blocked and /wiki/ is encouraged. Nobody wants technical pages and wiki histories in search results; they only want the current versions of the pages.
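      A robots.txt following that convention looks roughly like this (a minimal sketch of the pattern described above, not copied from any particular install; exact paths vary by site configuration):

      ```
      # Keep crawlers out of the script path: edit forms, page histories,
      # and other endpoints served through /w/index.php.
      User-agent: *
      Disallow: /w/

      # /wiki/ is intentionally NOT listed, so the rendered, current
      # revision of each article stays crawlable and indexable.
      ```

      The trick is that both paths ultimately serve the same content; the crawler just gets funneled toward the one canonical, reader-facing URL per page.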

      Fun tidbit: in the late 1990s, there was a real epidemic of spammers scraping web pages for email addresses. Some people developed wpoison.cgi, a script whose sole purpose was to generate garbage web pages full of bogus email addresses. Real search engines ignored these, thanks to robots.txt. Guess what the spam bots did?
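      The idea is simple enough to sketch. Here's a hypothetical re-creation in Python, not the original wpoison.cgi code; the word list, function names, and /trap path are all invented for illustration:

      ```python
      import random

      # Invented filler vocabulary for the garbage page text.
      WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot"]

      def fake_email(rng: random.Random) -> str:
          """Generate a plausible-looking but bogus email address."""
          user = "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(8))
          return f"{user}@{rng.choice(WORDS)}.example.com"

      def poison_page(seed: int = 0, n_emails: int = 20) -> str:
          """Build a garbage HTML page full of bogus addresses, plus a link
          that loops back into more generated garbage, so a scraper that
          ignores robots.txt wastes its time and pollutes its harvest."""
          rng = random.Random(seed)
          lines = ["<html><body>"]
          for _ in range(n_emails):
              addr = fake_email(rng)
              lines.append(f'<p>Contact <a href="mailto:{addr}">{addr}</a></p>')
          # Self-referential link: every page the bot follows is more poison.
          lines.append(f'<a href="/trap?page={rng.randint(0, 10**6)}">more</a>')
          lines.append("</body></html>")
          return "\n".join(lines)
      ```

      Point a scraper at this and it harvests twenty dud addresses per page, forever. A well-behaved crawler never sees it, because the trap path is listed in robots.txt.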

      Do the AI bros really want to go there? Are they asking for model collapse?

      • @gayhitler420@lemm.ee
        25 · 10 months ago

        Of course they want the model collapse. Literally no American tech company has ever been about reliably and sustainably supplying a good or service, or stewarding some public good.

        They’re doing the VC money -> juice the stock -> gut the resources cycle. Nobody cares about the model.

      • Gamma
        19 · 10 months ago

        Considering Reddit has decided to start selling user content for training, yeah, I guess they do want their models to collapse. There’s so much bot-generated content there nowadays.