Hey guys, you already know about the different web crawler tools used to crawl documents available on a web application. Today we are going to use a tool called waybackurls, which works much like other web crawlers. Basically, the tool accepts line-delimited domains on stdin, fetches known URLs from the Wayback Machine for *.domain, and prints them on stdout.
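In other words, once it is installed (covered below), waybackurls behaves like any other Unix filter. As a quick sketch, assuming you have a file of your own, say domains.txt, with one domain per line:

cat domains.txt | waybackurls    # domains.txt is just an example file name, not something the tool creates

You can also pass a single domain directly as an argument, which is what we will do in the examples that follow.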
Let's get started
Installation
To use this tool, you first need to install Go on your machine; without it, the tool cannot be built. Let's install it with the following command.
apt install golang
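You can quickly confirm that Go is available by checking its version:

go version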
Installing the Waybackurls Tool:
Once Go is installed, we can download the tool with Go and then run it from anywhere.
- go get github.com/tomnomnom/waybackurls
- waybackurls -h
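Note: on newer Go releases (1.17 and later), installing binaries with go get is deprecated and eventually stops working. If the go get command above fails on your machine, the equivalent modern commands are:

- go install github.com/tomnomnom/waybackurls@latest
- export PATH=$PATH:$(go env GOPATH)/bin

The second line simply adds Go's binary directory to your PATH so that you can run waybackurls from anywhere.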
Done. Everything looks good. Let's use our tool. All we need is the target URL we want to crawl in the command, and that's it. It will automatically fetch all the URLs and documents of the web application.
Usage:
waybackurls testphp.vulnweb.com
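Because everything goes to stdout, you can chain the results into other command-line tools. As one possible workflow (ordinary shell filtering, not a feature of waybackurls itself), you could keep only URLs that contain a parameter, or only JavaScript files:

waybackurls testphp.vulnweb.com | grep "="
waybackurls testphp.vulnweb.com | grep '\.js$'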
Exclude Subdomains
By default, it fetches URLs for all subdomains of the given domain as well, but if you only want to crawl the exact domain you supply, you can add the -no-subs flag before the domain.
Usage:
waybackurls -no-subs testphp.vulnweb.com
How to save the output?
The tool has no built-in option to save its output, but if you want to save it to a txt file, you can simply redirect stdout with the following command.
waybackurls <URL> > <output file name>
waybackurls testphp.vulnweb.com > results.txt
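The Wayback Machine often records the same URL many times, so it can also help to de-duplicate the results before saving them. A typical pipeline (plain shell, reusing the example domains.txt file mentioned earlier) would be:

cat domains.txt | waybackurls -no-subs | sort -u > results.txt    # sort -u removes duplicate URLs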