WikiTeam provides tools for downloading and preserving wikis, from Wikipedia to the tiniest ones. As of 2023, WikiTeam has preserved more than 350,000 wikis.
Lookyloo is a web interface that captures a website page and then displays a tree of the domains that call each other.
aria2 is a lightweight, multi-protocol, multi-source, cross-platform download utility that operates from the command line. It supports HTTP/HTTPS, FTP, SFTP, BitTorrent, and Metalink.
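A minimal command-line sketch of the two most common uses (the URL and file names are placeholders, and the call is guarded so it is simply skipped where aria2c is not installed):

```shell
# Prepare a download directory (created even when aria2c is absent).
mkdir -p downloads

if command -v aria2c >/dev/null 2>&1; then
  # Fetch one file with up to 4 connections per server (-x),
  # resuming a partial download if one exists (-c).
  aria2c -x 4 -c -d downloads -o release.tar.gz \
    "https://example.org/release.tar.gz"

  # Batch mode: download every URL listed in urls.txt, one per line.
  # aria2c -c -i urls.txt -d downloads
fi
```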
wayback (wabarc/wayback) is a self-hosted toolkit for archiving webpages to the Internet Archive, archive.today, IPFS, and local file systems.
Web Archives (dessant/web-archives) is a browser extension for viewing archived and cached versions of web pages, available for Chrome, Edge, and Safari.
SingleFile is a web extension and CLI tool for saving a faithful copy of an entire web page in a single HTML file.
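A hedged sketch of the CLI side, assuming the single-file-cli package is installed and exposes a `single-file` command (the URL and output path are placeholders; the call is skipped where the command is absent):

```shell
# Output directory for saved pages (created unconditionally).
mkdir -p pages

if command -v single-file >/dev/null 2>&1; then
  # Save the page with its stylesheets, images, and fonts inlined
  # into a single self-contained HTML file.
  single-file "https://example.org/" pages/example.html
fi
```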
HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, recursively building all directories and getting HTML, images, and other files from the server to your computer. HTTrack preserves the original site's relative link structure: simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link as if you were viewing it online. HTTrack can also update an existing mirrored site and resume interrupted downloads. It is fully configurable and has an integrated help system. WinHTTrack is the Windows 2000/XP/Vista/Seven/8 release of HTTrack, and WebHTTrack the Linux/Unix/BSD release.
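The recursive mirroring described above can be sketched with HTTrack's command-line form, which takes a start URL, an output directory (-O), and scope filters (the site and filter pattern here are placeholders; the call is skipped where httrack is not installed):

```shell
# Destination for the mirrored site (created unconditionally).
mkdir -p mirror

if command -v httrack >/dev/null 2>&1; then
  # Mirror the site into ./mirror, restricting the crawl to the
  # example.org host via the "+" filter, with verbose progress (-v).
  httrack "https://example.org/" -O mirror "+*.example.org/*" -v
fi
```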
grab-site is the archivist's web crawler: WARC output, a dashboard for all crawls, and dynamic ignore patterns.
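A hedged usage sketch, assuming grab-site is installed and following the flag names in its README (`--no-offsite-links` and the shared `--igsets` ignore sets); the crawl URL is a placeholder, and the call is skipped where the command is absent:

```shell
# grab-site writes its WARC and log files into the current directory,
# so work inside a dedicated crawls directory (created unconditionally).
mkdir -p crawls

if command -v grab-site >/dev/null 2>&1; then
  (
    cd crawls
    # Crawl the site, staying on the start host and applying the
    # shared "blogs" ignore set of URL patterns.
    grab-site "https://example.org/" --no-offsite-links --igsets=blogs
  )
fi
```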
ScrapBook X is a legacy Firefox add-on that captures web pages to a local device for future retrieval, organization, annotation, and editing.
Cyotek WebCopy is a free tool for automatically downloading the content of a website onto your local device.
WebCopy scans the specified website and downloads its content. Links to resources such as style sheets, images, and other pages in the website are automatically remapped to match the local path. Using its extensive configuration, you can define which parts of a website are copied and how; for example, you could make a complete copy of a static website for offline browsing, or download only images or other resources.