Sorry to say, archive.org is under a DDoS attack. The data is not affected, but most services are unavailable.
We are working on it & will post updates in comments.
Brought to you by the Department of Erasing History.
I doubt this has to do with “powerful people”. A DDoS attack does not remove anything from the net; it only makes it temporarily hard to reach.
There are firms that specialize in suppressing information on the net. They use SEO tricks to get sites down-ranked, as well as (potentially fraudulent) copyright and GDPR requests.
There must be any number of “little guys” who hate the Internet Archive. They scrape copyrighted stuff and personal data “without consent” and even disregard robots.txt. Lemmy is full of people who think that people should go to jail for that sort of thing.
Lots of grand conspiracy theories in this thread when, in the end, it’s probably some bored script kiddie.
I doubt it. I’d sooner think it’s a corporation or state actor.
How does taking the website down for a few hours help those people? Especially a state actor? If it was the US government or someone like them wouldn’t they do something more permanent? Actually wipe the website?
Some news source released something that got redacted based on government pressure. Archive made a snapshot of the news source. Now the state actor goes after the Archive to prevent time sensitive information from spreading. They benefit from the information not being widely available immediately.
Oh…so what got released today?
How would I know? The news source retracted their statement and archive.org is down…
It’s up for me. It was down a few hours, tops, and I remember checking it around the time you made that post as well.
Israel attacking Palestine again possibly
What does knocking the website offline for a few hours do for their war?
OwN tEh LiBs !!!
Who knows? How in the world would I know?
err…it was your suggestion?
I offered a plausible explanation; never did I suggest it was THE reason. Near-zero chance we will ever know.
Aliens or illuminati, for sure.
Is it still something you can do to big sites the way people did back in the 2000s?
Yep, but usually the worst-case scenario is a few hours of downtime.
Oh that’s true. I’ve seen a lot of cancel/call-out documents archived on IA, some of which were directed at children or had false accusations on them. It would be funny but not that surprising if all of this was over obscure Twitter drama.
That’s one of the problems with archiving everything. I lean in favor of the IA, but there are still issues.
Can you elaborate on the last part?
TBH I can understand that it’s a problem for people who aren’t expecting it. If they disregard instructions not to index things then that’s also a problem. The only real way to prevent scrapers from replicating content is to place it behind a registration wall.
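For context, the “instructions not to index” mentioned above are usually expressed in a robots.txt file at the site root. A minimal, hypothetical example (paths are made up for illustration):

```text
# Hypothetical robots.txt asking all crawlers to skip a private directory.
# Compliance is entirely voluntary: a scraper or archiver can simply ignore it,
# which is why a registration wall is the only reliable barrier.
User-agent: *
Disallow: /private/
```

Since honoring robots.txt is a convention rather than an enforcement mechanism, anything publicly reachable can still be copied by a non-compliant crawler.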
https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/
Does this answer the question?