The program helps you collect subdomains of a list of given second-level domains (SLDs); for example, www.tsinghua.edu.cn is a subdomain of tsinghua.edu.cn.
- Option 1: Download from GitHub Releases directly (Recommended)
- Option 2: Go Install
$ go install github.com/WangYihang/Subdomain-Crawler/cmd/subdomain-crawler@latest
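After installation, verify the binary is reachable; go install places it under $GOBIN (typically $HOME/go/bin), so make sure that directory is on your PATH:
$ subdomain-crawler --version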
- Edit the input file input.txt, listing one second-level domain per line:
$ head input.txt
tsinghua.edu.cn
pku.edu.cn
fudan.edu.cn
sjtu.edu.cn
zju.edu.cn
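You can also generate the file from the shell; any plain-text list of domains works, for example:
$ printf '%s\n' tsinghua.edu.cn pku.edu.cn fudan.edu.cn > input.txt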
- Run the program
$ subdomain-crawler --help
Usage:
  subdomain-crawler [OPTIONS]

Application Options:
  -i, --input-file=    The input file (default: input.txt)
  -o, --output-folder= The output folder (default: output)
  -t, --timeout=       Timeout of each HTTP request (in seconds) (default: 4)
  -n, --num-workers=   Number of workers (default: 32)
  -d, --debug          Enable debug mode
  -v, --version        Version

Help Options:
  -h, --help           Show this help message
$ subdomain-crawler
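All options fall back to the defaults shown above when omitted. For example, to read a custom input list, raise the per-request timeout, and run with more workers (universities.txt and results are placeholder names, not part of the tool):
$ subdomain-crawler -i universities.txt -o results -t 8 -n 64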
- Check out the results in the output/ folder.
$ head output/*
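Assuming each file under output/ lists the discovered subdomains one per line (the exact output layout is not shown here), the per-domain results can be merged and deduplicated into a single list:
$ sort -u output/* > subdomains.txt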