Web Crawler Security Tool
A web crawler oriented to infosec.
The Web Crawler Security Tool is a Python-based tool that automatically crawls a web site. It is designed to support penetration-testing tasks: its main job is to find and list all the links (pages and files) in a web site.
The crawler was completely rewritten in v1.0, bringing many improvements: better data visualization, an interactive option to download files, faster crawling, export of the list of found files to a separate file (useful for crawling a site once, then downloading the files and analysing them with FOCA), output logging in Common Log Format (CLF), basic-authentication support, and more!
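For reference, a CLF entry records the client host, identity, user, timestamp, request line, status code, and response size. A minimal sketch of emitting such a line (the `clf_line` helper is illustrative, not part of the tool):

```python
from datetime import datetime, timezone

def clf_line(host: str, method: str, path: str, status: int, size: int) -> str:
    # Common Log Format: host ident authuser [date] "request" status bytes.
    # The ident and authuser fields are "-" when unknown.
    ts = datetime.now(timezone.utc).strftime("%d/%b/%Y:%H:%M:%S %z")
    return f'{host} - - [{ts}] "{method} {path} HTTP/1.1" {status} {size}'

print(clf_line("192.0.2.1", "GET", "/index.html", 200, 1024))
```

Because the log follows CLF, the output can be fed to standard log-analysis tools.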
Many of the old features have been reimplemented; the most interesting is the crawler's ability to search for directory indexing.
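A common heuristic for spotting directory indexing is the default "Index of /&lt;path&gt;" title that Apache and similar servers emit for auto-generated listings. A minimal sketch of that check (the pattern and helper name are illustrative assumptions, not the tool's actual code):

```python
import re

# Heuristic: auto-generated directory listings typically carry a
# "<title>Index of /...</title>" header (assumption for this sketch).
INDEX_RE = re.compile(r"<title>\s*Index of /", re.IGNORECASE)

def looks_like_directory_index(html: str) -> bool:
    return bool(INDEX_RE.search(html))

print(looks_like_directory_index("<html><title>Index of /backup</title></html>"))  # True
```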
Some features:
- Crawls HTTP and HTTPS web sites (including sites on non-standard ports).
- Lets you set the crawl depth (-C option).
- Generates a summary at the end of the crawl with statistics about the results.
- Uses the HEAD method to analyse file types before crawling, which significantly speeds up the crawler.
- Uses regular expressions to find 'href', 'src' and 'content' links.
- Identifies relative links.
- Identifies non-HTML files and lists them.
- Does not crawl non-HTML files.
- Identifies directory indexing.
- Crawls directories with indexing enabled (not yet implemented in v1.0).
- and many more...
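The HEAD-before-crawl idea from the feature list can be sketched as follows: a HEAD request returns only the response headers, so the crawler can decide from the Content-Type whether a URL is worth fetching in full. The helper names here are illustrative, not the tool's actual API:

```python
import urllib.request

def is_html_content_type(ctype: str) -> bool:
    # Only the media type matters; parameters such as charset are ignored.
    return ctype.split(";")[0].strip().lower() in ("text/html", "application/xhtml+xml")

def head_is_html(url: str) -> bool:
    # HEAD retrieves only the headers, so the crawl/skip decision is made
    # without downloading the response body.
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return is_html_content_type(resp.headers.get("Content-Type", ""))
```

This is what lets the crawler list non-HTML files without wasting bandwidth on their bodies.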
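The regex-based link extraction and relative-link handling mentioned above can be sketched like this. The pattern is a simplified illustration (the real tool's expressions may differ); it captures the URL from `href`, `src` and `content` attributes, and relative results are resolved against the page URL with `urljoin`:

```python
import re
from urllib.parse import urljoin

# Simplified pattern: capture quoted values of href=, src= and content=
# attributes (illustrative, not the tool's exact regex).
LINK_RE = re.compile(r"""(?:href|src|content)\s*=\s*["']([^"']+)["']""", re.IGNORECASE)

def extract_links(html: str) -> list[str]:
    return LINK_RE.findall(html)

html = '<a href="/about">About</a><img src="logo.png">'
print(extract_links(html))  # ['/about', 'logo.png']

# A relative link is resolved against the page it was found on:
print(urljoin("http://example.com/dir/page.html", "../img/logo.png"))
# http://example.com/img/logo.png
```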