       _       _      _   
      (_)     | |    | |  
__   ___  ___ | | ___| |_  
\ \ / / |/ _ \| |/ _ \ __| 
 \ V /| | (_) | |  __/ |_  
  \_/ |_|\___/|_|\___|\__|  
    
-- foundation --

we archive the web

instead of relying on the internet archive or archive.is, we manually scrape web pages from neocities and nekoweb and archive them ourselves.

our mission is to archive the indie web as best we can and help recover websites when they get lost or deleted.

see our archive
we know this may come across as creepy, or like we're just stealing art and code. if you're worried, check out our policy and guide

FAQ

what should i do if i don't want my website scraped?

make a new file named "NOCONSENT.txt" in the root of your site and put the following content in it: "noconsent". we check for that single file on every website we go through. if it exists, we skip your site; if it doesn't, we scrape your site and archive it here. if you have a robots.txt that disallows everything (or has a catch-all rule for bots it doesn't name), we will respect that as well.
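for reference, a robots.txt that blocks everything looks like this:

User-agent: *
Disallow: /

and here's a rough sketch (not our exact tooling) of how a volunteer could check for the opt-out file before scraping anything. the example.com url is just a placeholder; curl's -s, -o and -w flags are standard:

# check whether the site has opted out before archiving anything
# (NUL is the windows null device, so the response body is discarded)
$status = curl.exe -s -o NUL -w "%{http_code}" https://example.com/NOCONSENT.txt
if ($status -eq "200") {
    Write-Output "NOCONSENT.txt found, skipping this site"
} else {
    Write-Output "no NOCONSENT.txt, ok to archive"
}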

how will you scrape my site?

our volunteers scrape your site using curl from a command prompt, and we re-scrape it every month. learn more about curl here.
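a minimal sketch of what one monthly run could look like, assuming a hypothetical sites.txt list with one url per line (the filenames and list are made up for illustration):

# fetch each site on the list and save a date-stamped copy
$ts = Get-Date -Format "yyyy-MM"
foreach ($site in Get-Content sites.txt) {
    $name = ([uri]$site).Host    # e.g. example.neocities.org
    curl.exe -sL $site -o "$name-$ts.html"
}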

can i request that my website be archived?

yes of course! you can submit a site link to us here!

i wanna be a volunteer!

awesome! fill out this form here!!!

i want my site outta here!

that's alright. you can email me anytime!

what command do you use for downloading sites?

# grab the page and save it as a date-stamped html file in the downloads folder
$ts = Get-Date -Format "yyyy-MM-dd"; curl.exe https://example.com -o "C:\Users\rearw\Downloads\$ts.html"
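the date in the filename keeps each run's snapshot separate, so monthly re-scrapes don't overwrite older copies.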


operated by auth