Recon is important, but some people hate it. I get it.
When you're in the zone and ready to pounce on a target, you just want to start hacking.
Want the best of both worlds: quick yet complete recon, without sacrificing coverage?
As an offensive security and testing connoisseur, I love recon. But after talking with many other hackers about their flow, I've found it's always divided: plenty of hackers don't enjoy it at all and are far more comfortable getting on a target as fast as possible.
So, for those of you in the second camp, what do I recommend so you get the benefits of great recon without all the headaches? And what counts as great coverage?
I like to call it “recon++”: a package of subdomain finding and associated recon. Any setup that does all of these is in really good shape:
GitHub source code scraping
DNS records analysis
De-duplication and live-only host filtering (web probing)
Introductory content discovery
This gives you a complete picture of an org's subdomains and starts cursory analysis of them.
Doing this individually is time-consuming. So for the low-low cost of the initial setup time, you can get GREAT recon++ and you don’t have to be a recon-head to get it!
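To see why, here's roughly what the manual version of that list looks like. The tool choices beyond subfinder are my own assumptions for illustration (ReconFTW wires up its own stack), so treat this as a sketch:

```shell
# Manual "recon++", step by step -- illustrative tool choices only
subfinder -d target.com -silent > subs.txt                  # subdomain enumeration
github-subdomains -d target.com -t "$GH_TOKEN" >> subs.txt  # GitHub scraping
sort -u subs.txt -o subs.txt                                # de-duplication
dnsx -l subs.txt -a -resp -silent > dns.txt                 # DNS records analysis
httpx -l subs.txt -silent > live.txt                        # live-only host filtering
ffuf -w wordlist.txt -u https://SOME_LIVE_HOST/FUZZ         # content discovery, per host
```

Run by hand, every one of those steps needs babysitting, output wrangling, and re-running; that's the time automation buys back.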
ReconFTW automates the entire process of reconnaissance for you. It performs “recon++” and can be used to run some cursory automated vulnerability checks like XSS, Open Redirects, SSRF, CRLF, LFI, SQLi, SSL tests, SSTI, DNS zone transfers, and more.
Here’s a view of the full scope of what it can do and what it uses:
Once you have ReconFTW tuned how you want it, you just kick it off on any project, and when it’s done you review the output of subdomains and scan alerts.
What does it do? / How long does it take?
./reconftw.sh -d target.com -s
The above runs just subdomain enumeration (using multiple tools), screenshotting, bucket checks, and zone transfers.
It can provide you with all this in 10-60 minutes depending on target size.
./reconftw.sh -d target.com -r
This is “recon++”, which adds port scanning, content discovery, and nuclei scanning.
This can take a “few” hours depending on the target size.
./reconftw.sh -d target.com -a --deep
This is everything. Deep mode adds command-line spidering and scanning with all the tools.
This can take overnight or several days depending on target size.
Here’s what it looks like:
So, how do you get the MOST out of ReconFTW?
(you want to, believe me)
Well, like many frameworks, ReconFTW glues together a bunch of disparate tools: Amass, subfinder, nmap, and so on.
To get the best output, you need to ensure you have API keys for services those tools pull data from.
An advanced user of ReconFTW is a master of the config files for the framework and its underlying tools. The ReconFTW config file is also where you can globally tune what kind of enumeration and scanning the tool does: everything from how fast it scans to proxy settings, which vuln checks, which wordlists, which ports, timeout options, and more.
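To give a flavor, here are a few knobs from a reconftw.cfg. NUCLEI_FLAGS shows up again later in this post; the other variable names are from my copy and may differ between versions, so treat them as illustrative:

```shell
# Illustrative reconftw.cfg settings -- names besides NUCLEI_FLAGS
# are assumptions from my copy; check your own config file.
OSINT=false            # skip the OSINT/Googling phase
FUZZ=true              # enable content discovery (fuzzing)
NUCLEI_FLAGS=" -silent -t $HOME/nuclei-templates/ -retries 2"
```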
1) Set up one of the underlying tools, Amass, with all the free and paid API keys you have. Amass API keys for services are defined in:
Hahwul has an excellent blog on acquiring API keys for Amass:
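For reference, a data-source entry in Amass v3's config.ini looks like this (section layout per the example config shipped with Amass; v4 moved data sources to a YAML file, so check the docs for your version):

```ini
; Example data-source credentials in Amass v3's config.ini
[data_sources.SecurityTrails]
[data_sources.SecurityTrails.Credentials]
apikey = your_api_key_here
```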
2) Make sure you have at least 5 GitHub tokens defined in a file called:
This allows your scraping of GitHub to be far more likely to return the proper results.
3) If you want to extend ReconFTW to alert you in slack or have some PAID API keys for intel platforms, you can define some explicitly in the ReconFTW config file:
Here’s MY config file (without API Keys)
It disables some OSINT/Googling and some JS analysis, and makes a few other minor tweaks. It allows for fast recon++.
OK, so for the one-time setup cost you get GREAT recon++ with little hassle. Let’s review:
Register for APIs and services
Copy my config file
Add API keys to Amass and ReconFTW
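Condensed into commands, the one-time setup looks like this (the repo URL is the official one; see its README if your install differs):

```shell
git clone https://github.com/six2dez/reconftw
cd reconftw
./install.sh                        # installs ReconFTW and its underlying tools
cp reconftw.cfg reconftw.cfg.bak    # keep a pristine copy before tuning
"$EDITOR" reconftw.cfg              # merge in your tuned config and API keys
```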
ReconFTW and Amass Pro Tips:
Now that you're done setting up Amass and ReconFTW, what if another tool comes out that's novel at finding subdomains? Check out my friend Patrik's talk on integrating new tools with the Amass scripting engine. This talk is FIRE:
Dealing with hanging scans
ReconFTW is a sophisticated wrapper around many industry-standard tools, and each phase of ReconFTW runs one of those tools. If for any reason you want to skip a phase, you can spin up a process viewer
and find reconftw.sh, and under it the tool it’s running, and under that the tool’s threads.
Simply F9 (kill) the TOOL under reconftw.sh to skip it.
This is useful if that phase has stalled or is taking too long, or if you forgot to remove a phase you don’t care about from the config file before hacking. The great thing about ReconFTW is that it saves scan progress in flat files, so restarting it is not painful.
Configuring Nuclei Checks
When doing -r or above, ReconFTW adds nuclei scanning. The info- and low-severity checks from nuclei can be quite “busy”, so you can tune which checks to exclude in the ReconFTW config file.
Open the config file and search for “NUCLEI_FLAGS=”
Here you can specify which templates you want to exclude by adding the -eid flag. An edit like this would look like:
NUCLEI_FLAGS=" -silent -t $HOME/nuclei-templates/ -retries 2 -eid addeventlistener-detect,tech-detect,ssl-issuer,ssl-dns-names"
Sometimes I just set my Nuclei config file to:
and omit info and low altogether.
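One flag-based way to omit info and low, as an alternative to nuclei's own config file, is the exclude-severity flag in the same NUCLEI_FLAGS line (verify the flag name with nuclei -h on your version):

```shell
NUCLEI_FLAGS=" -silent -t $HOME/nuclei-templates/ -retries 2 -es info,low"
```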
ReconFTW for Blue Teamers
Another great way to use ReconFTW is from a Blue Team perspective. You can use it to build an external asset inventory / attack surface program for your org.
It’s way cheaper and more effective than 80% of the paid products out there. Recon tools from the hacker scene are in constant development, and many asset management / attack surface management companies do not keep up. Save your org $20-400k and just track it all in a spreadsheet lol.
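For the spreadsheet version of that program, the workhorse is just diffing subdomain snapshots over time. A minimal sketch with stand-in files (ReconFTW writes its real lists under its per-target output directory, so point the paths there):

```shell
# Diff two dated subdomain snapshots to catch newly exposed assets.
# old.txt / new.txt stand in for two copies of ReconFTW's subdomain output.
printf 'a.example.com\nb.example.com\n' | sort -u > old.txt
printf 'a.example.com\nb.example.com\nc.example.com\n' | sort -u > new.txt
comm -13 old.txt new.txt > new_assets.txt   # lines only in the newer snapshot
cat new_assets.txt                          # these go in the spreadsheet
```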
What are the tools/framework NOT going to do for you?
When doing vulnerability-checking type scans with -a and/or --deep, most of the vulnerability-checking tools are command-line tools: checking URLs for config issues (nuclei) and fuzzing the parameters located in each scan’s “gf files”.
The gf files contain only the URLs and parameters that command-line scanners have found in historical sources (Wayback Machine and the like); gf parses out “interesting” parameter names from that output.
So off the bat, the tools are not fuzzing everything, and they are not fuzzing REST-based endpoints. If you add --deep, gospider will do some command-line spidering, but that has its limitations too (heavy JS and other tech). Also, these options make scans run for days rather than hours.
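For reference, the gf step boils down to something like this (gf is tomnomnom's pattern matcher; pattern names like xss come from the Gf-Patterns set, and the file names here are hypothetical):

```shell
cat all_urls.txt | gf xss  > xss_candidates.txt   # URLs with XSS-ish params
cat all_urls.txt | gf sqli > sqli_candidates.txt  # URLs with SQLi-ish params
```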
Here’s my take on it:
I value ReconFTW for its subdomain discovery and associated recon. I treat everything else as a happy bonus. I don’t expect the vuln tools to give me much of anything.
Any “on app” testing should be done manually with Burp. Most times I even end up redoing content discovery to tweak filters and rules for the specific sites. TBH, these caveats apply to ANY framework that glues together tools like this, including the several that are now paid products.
More on Nuclei as part of ReconFTW
The fastest way to find the ID of a template you want to exclude (because it doesn’t always match the command-line output) is to take the template name from the command-line output and run:
nuclei -tl | grep ssl-dns-names
This should return the path to the YAML template, which contains the ID inside.
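From there, the id: field at the top of that YAML is the value -eid wants. Simulated below with a stub file so the commands are self-contained; point grep at the real path instead:

```shell
# Stub standing in for the template file that `nuclei -tl | grep ...` returns
cat > ssl-dns-names.yaml <<'EOF'
id: ssl-dns-names

info:
  name: SSL DNS Names
  severity: info
EOF
grep -m1 '^id:' ssl-dns-names.yaml   # this value is what -eid takes
```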
Some templates, though, are GIANT collections.
Take “tech-detect” for example. It houses many tech-detection rules, but what if I only want to exclude one of them within that many-in-one template?
My command line says it’s alerting on:
[tech-detect:cloudflare] [http] [info]
I don't care about knowing that, and it's clogging up my scan.
Finding the YAML file for “tech-detect” with the above command and excluding its ID would exclude ALL of its detections. The only way to individually exclude the Cloudflare part is to actually edit the YAML file and remove that one check.
Bonus: catch an interview with @six2dez1 here about his experience and creating the tool.
(I enabled Closed Captioning and Google Translate to listen)