creepyCrawler

Crawl a site and extract useful information for recon.

Provide a starting URL, and the crawler automatically gathers additional URLs to crawl from hrefs, robots.txt, and the sitemap.
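
To give a rough idea of how that discovery step can work, here is a minimal, hedged sketch (not creepyCrawler's actual code) that collects candidate URLs from page hrefs, robots.txt rules, and sitemap `<loc>` entries. It assumes the third-party `requests` and `beautifulsoup4` packages; the `gather_urls` name and its parameters are illustrative.

```python
import re
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def gather_urls(start_url, timeout=10):
    """Return a set of candidate URLs discovered from the starting page."""
    found = set()

    # 1. Follow hrefs on the starting page
    resp = requests.get(start_url, timeout=timeout)
    soup = BeautifulSoup(resp.text, "html.parser")
    for a in soup.find_all("a", href=True):
        found.add(urljoin(start_url, a["href"]))

    # 2. Parse robots.txt for Allow/Disallow paths and Sitemap entries
    sitemaps = {urljoin(start_url, "/sitemap.xml")}
    robots = requests.get(urljoin(start_url, "/robots.txt"), timeout=timeout)
    if robots.ok:
        for line in robots.text.splitlines():
            key, _, value = line.partition(":")
            key, value = key.strip().lower(), value.strip()
            if key in ("allow", "disallow") and value and value != "/":
                found.add(urljoin(start_url, value))
            elif key == "sitemap" and value:
                sitemaps.add(value)

    # 3. Pull <loc> entries from the sitemap(s)
    for sitemap_url in sitemaps:
        sm = requests.get(sitemap_url, timeout=timeout)
        if sm.ok:
            found.update(re.findall(r"<loc>\s*(.*?)\s*</loc>", sm.text))

    return found
```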

Extract useful recon info (a minimal extraction sketch follows the list):
- Emails
- Social media links
- Subdomains
- Files
- A list of crawled site links
- HTML comments
- IP addresses
- Marketing tags (UA, GTM, etc.)
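
The sketch below shows one way such extraction could be done with regexes over fetched HTML. It is illustrative only: the patterns, function name, and return structure are assumptions, not creepyCrawler's actual internals.

```python
import re


def extract_recon(html, domain):
    """Pull emails, subdomains, IPs, HTML comments, social links, and marketing tags."""
    return {
        "emails": set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html)),
        "subdomains": set(re.findall(rf"[\w.-]+\.{re.escape(domain)}", html)),
        "ips": set(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", html)),
        "comments": re.findall(r"<!--(.*?)-->", html, re.DOTALL),
        "social": set(re.findall(
            r"https?://(?:www\.)?(?:twitter|facebook|linkedin|instagram|github)\.com/[\w./-]+",
            html)),
        # Google Analytics (UA-XXXXXX-Y) and Google Tag Manager (GTM-XXXXXX) IDs
        "marketing_tags": set(re.findall(
            r"\b(?:UA-\d{4,10}-\d{1,4}|GTM-[A-Z0-9]{4,9})\b", html)),
    }
```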

+ 'Interesting' findings, such as frame-ancestors directive content and resources that return JSON content
+ Built-in FireProx support to automatically create an endpoint for each subdomain, rotate the source IP, and clean up at the end
+ HTTP/SOCKS proxy support
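
As a hedged sketch of the last two points (not the tool's actual implementation), a response could be flagged as 'interesting' by inspecting its Content-Type and Content-Security-Policy headers, with requests optionally routed through an HTTP or SOCKS proxy. The function and parameter names are illustrative; SOCKS URLs require the `requests[socks]` extra.

```python
import requests


def fetch_and_flag(url, proxy=None, timeout=10):
    """Fetch a URL (optionally via a proxy) and report 'interesting' traits."""
    # proxy examples: "http://127.0.0.1:8080" or "socks5://127.0.0.1:9050"
    proxies = {"http": proxy, "https": proxy} if proxy else None
    resp = requests.get(url, proxies=proxies, timeout=timeout)

    findings = []
    if "application/json" in resp.headers.get("Content-Type", ""):
        findings.append("returns JSON content")
    csp = resp.headers.get("Content-Security-Policy", "")
    if "frame-ancestors" in csp:
        findings.append(f"frame-ancestors present: {csp}")
    return resp, findings
```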