An update to Google’s privacy policy suggests that the entire public internet is fair game for its AI projects.

  • renrenPDX@lemmy.world · 1 year ago

    Why is AI scraping not respecting robots.txt? It wasn’t okay in the early internet days, so why is it okay now? People are complaining about being overloaded by scrapers like it’s the ’90s.

      • renrenPDX@lemmy.world · 1 year ago

        It’s a plain text file hosted on your site that’s visible to the whole internet. It basically allows or disallows scraping of your site by search engines and other crawlers.
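
        For example, a minimal robots.txt might look like this (the path and the bot name here are just made up for illustration):

            User-agent: *
            Disallow: /admin/

            User-agent: ExampleAIBot
            Disallow: /

        The first block tells every crawler to stay out of /admin/, and the second tells a specific crawler to stay out of the whole site.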

      • sudo@lemmy.fmhy.ml · 1 year ago

        Here’s an example https://www.google.com/robots.txt

        Basically, it’s a file people put in the root directory of their domain to tell automated web crawlers which sections of the site they can access, and which crawlers are allowed to access those resources at all.

        It isn’t legally binding, more of a courtesy. Some sites will block traffic if they detect prohibited behavior, so it gives your crawler an idea of what’s okay so it doesn’t get blocked.
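
        If you’re writing a crawler yourself, Python’s standard library can parse robots.txt and answer “am I allowed to fetch this?” for you. A minimal sketch (the user-agent name is just a placeholder):

            from urllib.robotparser import RobotFileParser

            # Download and parse the site's robots.txt
            rp = RobotFileParser("https://www.google.com/robots.txt")
            rp.read()

            # Ask whether a given user agent may fetch a given URL,
            # according to the rules in that file
            if rp.can_fetch("MyCrawlerBot", "https://www.google.com/search?q=test"):
                print("allowed, go ahead")
            else:
                print("disallowed by robots.txt, skip it")

        Again, nothing enforces this on the crawler’s side; it only works if the crawler bothers to check.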