Web Application Enumeration
This is my methodology for when I encounter a web server that's actually hosting content.
Opening Salvo
Starting off, there's a series of activities that I always do, since they're fairly automated.
Directory Bruting
Directory bruteforcing is a method for discovering non-public directories on a webserver by requesting candidate paths from a wordlist. This can be done manually from the command line with shell scripting, but it is more common (and often faster) to use pre-made tools.
Manually
This line will enumerate pages from the command line:
cat [wordlist] | while read -r word; do enc=$(urlencode "$word") && curl -o /dev/null -s --connect-timeout 1 -w "%{http_code} $word\n" [target URL]/"$enc"; done | grep -v 404
Replace [wordlist] with the wordlist that you're trying, and fill in the URL to the directory at [target URL]. Note that urlencode isn't a shell builtin; on Debian-based systems (including Kali) it comes from the gridsite-clients package.
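A filled-in example, assuming Kali's dirb common wordlist and a hypothetical target at http://10.10.10.10/:
# /usr/share/wordlists/dirb/common.txt ships with Kali; 10.10.10.10 is a made-up target
cat /usr/share/wordlists/dirb/common.txt | while read -r word; do enc=$(urlencode "$word") && curl -o /dev/null -s --connect-timeout 1 -w "%{http_code} $word\n" http://10.10.10.10/"$enc"; done | grep -v 404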
FFUF
Fuzz Faster U Fool is a pre-made tool that brutes directories faster than doing it manually.
Basic use is like this:
ffuf -w [wordlist] -u [url]/FUZZ
Recursion can also be specified with -recursion -recursion-depth [depth]. I wouldn't go much further than 2 on an initial scan, though. Usually I don't start with any recursion at all, but that's a personal preference.
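For example, a recursive scan against a hypothetical target, using Kali's dirb common list and matching only the status codes I care about with -mc:
# 10.10.10.10 is a made-up target; adjust the wordlist path to taste
ffuf -w /usr/share/wordlists/dirb/common.txt -u http://10.10.10.10/FUZZ -recursion -recursion-depth 2 -mc 200,301,302,403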
It can also be used to brute for subdomains if you have a suitable wordlist:
ffuf -c -w [wordlist] -u [URL] -H "Host: FUZZ.[domain name]"
In this case, [URL] is like http://example.com, while [domain name] would simply be the root and TLD, so the fuzzed header becomes Host: FUZZ.example.com.
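A filled-in sketch, assuming the SecLists DNS wordlist is installed (the path may vary on your system) and filtering out the catch-all response by size with -fs:
# Replace 10918 with the byte size of the default/catch-all response you actually observe
ffuf -c -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt -u http://example.com -H "Host: FUZZ.example.com" -fs 10918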
Crawling
Crawling or spidering is the process of following all links on a page in order to discover content. Search engines use it to index sites, but it can also be used as an enumeration tactic.
In Kali, we can use a simple tool called GoSpider to crawl a site:
gospider -s "[URL]"
I'm sure it has more options, but I rarely use this tool to any great effect, so I don't know them offhand.
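That said, it does support a crawl depth and an output directory, as I remember it (verify with gospider -h before trusting me):
# -d caps crawl depth, -o writes results out to a directory; flags from memory, double-check them
gospider -s "http://example.com" -d 2 -o gospider-output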
Manual Enumeration
If I haven't found an obvious vulnerability or indication that I should be doing web application framework enumeration (e.g. enumerating WordPress, Joomla, etc), then I'll move on to manual enumeration.
Burp
Burp Suite is a web proxy that can be used to intercept web requests. It comes installed and configured in Kali by default; if you're on a different operating system you'll need to install it manually, and if you want to use your own browser instead of Burp's built-in one, there's some extra setup.
Install
- Burp Community (the free version) can be downloaded from PortSwigger
- Run the downloaded shell script (Linux) and go through the installer (see the sketch after this list)
- Install the FoxyProxy Browser Plugin
- Configure a proxy pointing to 127.0.0.1:8080 in FoxyProxy
- Follow PortSwigger's Instructions on installing Burp's CA certificate. Without this, you'll get security warnings whenever you use Burp Proxy.
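For the install script itself, something like this (the exact filename varies with the version you download):
# Filename is illustrative; use whatever PortSwigger's site actually gives you
chmod +x burpsuite_community_linux_*.sh
./burpsuite_community_linux_*.sh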
Burp should be ready to use with your desired browser. Again, this is unnecessary if you are comfortable using Burp's internal Chromium browser.
Using Proxy
- Launch Burp
- If you're using your own browser, go ahead and enable your Burp proxy setup in FoxyProxy
- Along the top in Burp, you'll see Dashboard, Target, etc. Select Proxy.
- To turn the proxy on, simply click the button that says Intercept is Off. This will turn proxy interception on.
- To turn interception back off, simply click the same button, which will now read Intercept is On.
- If you intend to use the internal Burp browser, simply click the Open Browser button, and Burp's internal Chromium install will open.
Once interception is enabled, Burp will intercept all traffic between your browser and the target, pausing communication at every outbound request. You'll be presented with the option to either Forward or Drop requests as they come.
Request content can be analyzed from here, either to be tampered with in Burp, or else replayed in something like CURL if automation (or just a CLI) is desired.
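For example, you can replay a request from the CLI and still log it in Burp's proxy history by pointing CURL at the proxy with -x (the endpoint and POST data here are made up for illustration):
# -x routes the request through Burp; the username/password fields are hypothetical
curl -s -x http://127.0.0.1:8080 -d 'username=admin&password=guess' http://example.com/login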
Wappalyzer
Wappalyzer is a browser plugin that can help identify web stack technologies. Install it and play with it. It can be useful for basic passive reconnaissance as you walk through a site manually. Not much more to it, really.
Developer's Console
The developer's console is a powerful tool found in most modern browsers that allows the user to view the underlying source code of pages, as well as network statistics, etc.
In Firefox, this is accessed by right-clicking and selecting Inspect.
Inspector
The inspector can be used to view the HTML/CSS of a page. For social engineering, it can also be used to modify the code on the page locally.
For discovery, this tab is useful for finding hidden form inputs, linked assets such as JavaScript files, and really anything else that might be in the raw source code.
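You can get a quick-and-dirty version of this from the CLI, too; for instance, pulling a page and grepping the raw HTML for hidden inputs (example.com and /login are stand-ins):
# Crude, but quickly surfaces hidden form fields
curl -s http://example.com/login | grep -i 'type="hidden"'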
Debugger
The debugger allows us to view any client-side code (like JavaScript) that may be running on the site. Often this is minified (compacted so that the code takes up less bandwidth on transfer, which also makes it hard to read). This can be undone using the {} symbol at the bottom left of the main view pane, or by using an online tool like Beautifier.io. Obviously, sending code that you're deobfuscating to an external service like Beautifier.io could be a potential OPSEC violation, if that's a consideration.
This tab can be a decent workhorse for debugging and doing source code analysis against a site's assets.
Network
The network tab can be useful for seeing the latency between you and the site, load times for individual pages and files, etc. Using the throttling feature, it can also be used to simulate slower network conditions.
I cannot think of much of an enumeration use for this tab, but it is useful for certain types of development testing, especially checking the performance of sites still in development.
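If you want similar timing data outside the browser, CURL's -w write-out variables are a rough CLI analogue:
# Each time_* variable reports seconds elapsed up to that phase of the request
curl -o /dev/null -s -w 'dns: %{time_namelookup}s connect: %{time_connect}s ttfb: %{time_starttransfer}s total: %{time_total}s\n' http://example.com/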
CURL
CURL is a command-line tool for making HTTP requests (and transfers over a number of other protocols). It's also available as a library, libcurl, in many popular programming and scripting languages.
CURL is a very powerful tool for manually crafting HTTP requests, and should definitely be learned. For now, just go over the documentation; I plan to write a more extensive article on CURL in the future.
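In the meantime, a few invocations I reach for constantly (example.com is a stand-in target):
# Fetch response headers only
curl -I http://example.com/
# Follow redirects silently and print the final URL
curl -sL -o /dev/null -w '%{url_effective}\n' http://example.com/
# Send a POST with form-encoded data (field names are hypothetical)
curl -d 'user=admin&pass=guess' http://example.com/login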