Robots.txt is a file that gives search engine crawlers a polite hint about which pages shouldn't be crawled. It's not legally binding (I'm not a lawyer). It used to benefit both webmasters and search engine crawlers: Google's crawler would occasionally take sites down by accident just by sending them too much traffic. (Obviously not a concern anymore.)
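For the curious, here is a minimal sketch of how a polite crawler consults those hints, using Python's stdlib `urllib.robotparser`. The rules and the "ExampleBot" user agent are made up for illustration:

```python
from urllib import robotparser

# Hypothetical rules a site might serve at /robots.txt.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A polite crawler checks every URL against the rules before requesting it.
print(rp.can_fetch("ExampleBot", "https://example.com/private/report.html"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/index.html"))           # True
```

Note that the crawler does all the enforcement here; the file itself is just a published request.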
How can sites tell LLMs which data shouldn't be included in a training corpus? And are the incentives there for both data creators and data consumers?
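One conceivable answer is to reuse the same mechanism: a site could publish rules for AI-specific user agents, assuming training crawlers identified themselves and chose to honor them. A sketch under that assumption ("AITrainingBot" is an invented agent name, not a real crawler):

```python
from urllib import robotparser

# Hypothetical robots.txt that opts out of AI training crawlers only.
# "AITrainingBot" is an invented user agent, not a real crawler name.
ROBOTS_TXT = """\
User-agent: AITrainingBot
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The AI crawler is asked to stay away; everyone else crawls as usual.
print(rp.can_fetch("AITrainingBot", "https://example.com/article.html"))  # False
print(rp.can_fetch("SomeSearchBot", "https://example.com/article.html"))  # True
```

Whether data consumers would actually honor such rules is exactly the incentives question above.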
* Avo, about two weeks ago: https://www.heise.de/news/Google-aendert-Nutzungsbedingungen-Alles-darf-fuer-KI-Training-genutzt-werden-9207556.html (in German; the headline translates to "Google changes terms of use: everything may be used for AI training")
These companies are desperate to win and maintain a monopoly in the AI race.