Crawl options

Numerous options are available to configure the behavior of Burp Scanner during crawl-based scans. These can be configured on-the-fly when launching a scan, or can be maintained in Burp's configuration library.

Crawl optimization

These settings control the behavior of the crawl logic to reflect the objectives of the crawl and the nature of the application.

The maximum link depth represents the maximum number of navigational transitions (clicking links and submitting forms) that the crawler will make from the start URL(s). Modern applications tend to build a mass of navigation into every response, in locations like menus and the page footer. For this reason, it is normally possible to reach the vast majority of an application's content and functionality within a small number of hops from the start URL. Fully covering multi-stage processes (like viewing an item, adding it to a shopping cart, and checking out) will require more hops.

Some applications contain extremely long navigational sequences that don't lead to interestingly different functionality. For example, a shopping application might have a huge number of product categories, sub-categories, and view filters. To a crawler, this can appear as a very deep nested tree of links, all returning different content. In this situation, there are clearly diminishing returns to crawling deeply into the navigational structure, so it is sensible to limit the maximum link depth to a small number, such as 8.
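
Conceptually, the maximum link depth behaves like the depth bound in a simple breadth-first crawl. The sketch below is purely illustrative (it is not how Burp Scanner is implemented) and uses only the Python standard library; a real crawler would also submit forms, respect scope, and handle sessions:

    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collects href targets from anchor tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(start_url, max_link_depth=8):
        """Breadth-first crawl that stops after max_link_depth navigational hops."""
        seen = {start_url}
        queue = deque([(start_url, 0)])       # (url, hops from the start URL)
        while queue:
            url, depth = queue.popleft()
            if depth >= max_link_depth:
                continue                      # don't follow links any deeper
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except OSError:
                continue                      # skip unreachable pages
            extractor = LinkExtractor()
            extractor.feed(html)
            for href in extractor.links:
                target = urljoin(url, href)
                if target not in seen:
                    seen.add(target)
                    queue.append((target, depth + 1))
        return seen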

Crawl strategy

Real-world applications differ hugely in the way they organize content and navigation, the volatility of their responses, and the extent and complexity of application state. At one extreme, an application might employ a unique and stable URL for each distinct function, return deterministic content in each response, and contain no server-side state. At the other extreme, an application might employ ephemeral URLs that change each time a function is accessed, overloaded URLs that reach different functions through different navigational paths, volatile content that changes non-deterministically, and heavily stateful functions where user actions cause changes in the content and behavior that is subsequently observed.

Burp's crawler can handle both of these extremes. Where required, it can handle ephemeral and overloaded URLs, volatile content, and changes in application state. However, fully handling these cases imposes a material overhead in the amount of work involved in the crawl. You can use the crawl strategy setting to tune the approach taken to a specific application. In practice, this setting represents a trade-off between the speed of the crawl and the completeness of coverage achieved. The default strategy is appropriate for typical applications. You can select a strategy that is more optimized for speed when crawling an application with stable, unique URLs and no stateful functionality, or a strategy that is more optimized for completeness when crawling an application with more volatile or overloaded URLs, or more complex stateful functionality.
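
To illustrate why overloaded or ephemeral URLs make a faster, URL-keyed crawl insufficient, the sketch below contrasts two hypothetical ways of deciding whether a response represents a location the crawler has already seen. This is a conceptual illustration only, not Burp's actual algorithm:

    import re

    def location_key(url, response_body, strategy="fastest"):
        """Hypothetical location identity under two crawl strategies.

        "fastest" assumes every function has a stable, unique URL and keys
        locations by URL alone. "most complete" ignores volatile URLs and keys
        locations by a crude fingerprint of the response structure instead, so
        the same URL reached via different navigational paths can still be
        recognized as different functionality.
        """
        if strategy == "fastest":
            return url
        tags = re.findall(r"<([a-zA-Z][a-zA-Z0-9]*)", response_body)
        return tuple(sorted(set(tags)))       # structural fingerprint of the page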

Crawl limits

Crawling modern applications is sometimes an open-ended exercise, due to the amount of stateful functionality, volatile content, and unbounded navigation. Burp's crawler uses various techniques to maximize discovery of unique content early in the crawl. The settings for crawl limits let you impose a limit on the extent of the crawl, as it reaches the point of diminishing returns. It is generally sensible to configure a limit to the extent of the crawl, based on your knowledge of the application being scanned.
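
As a rough sketch of how such limits typically bound a crawl (the specific criteria and default values below are illustrative assumptions, not Burp's exact options):

    import time

    def crawl_until_limit(frontier, fetch, max_seconds=3600,
                          max_locations=1500, max_requests=10000):
        """Consumes URLs from the frontier until any configured limit is reached."""
        started = time.monotonic()
        locations, requests_made = set(), 0
        while frontier:
            if time.monotonic() - started > max_seconds:
                break                         # crawl time limit reached
            if len(locations) >= max_locations:
                break                         # enough unique locations discovered
            if requests_made >= max_requests:
                break                         # request budget exhausted
            url = frontier.pop()
            fetch(url)                        # caller-supplied request function
            requests_made += 1
            locations.add(url)
        return locations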

You can choose to limit the crawl based on:

Login functions

These settings control how the crawler will interact with any login functionality that is encountered during the crawl. You can configure whether the crawler should:

How does the crawler identify login and registration forms?

The crawler uses the following checklist to identify login and registration forms on the target site:

If all of these criteria are met, the crawler then distinguishes registration forms from login forms by applying the following rules in order. For example, if two forms have an equal number of password fields, it will then compare the number of text fields, and so on.

The registration form is whichever form has the most:

  1. Password fields
  2. Text fields
  3. Multi-value select fields
  4. Single-value select fields

If all of the above are equal, whichever form was found first is assumed to be the registration form.
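
As a rough illustration of this ordering (a sketch only, not Burp's actual implementation), the comparison can be expressed as a ranking over the field counts of each candidate form:

    from dataclasses import dataclass

    @dataclass
    class FormSummary:
        """Field counts for one candidate form, listed in discovery order."""
        password_fields: int
        text_fields: int
        multi_select_fields: int
        single_select_fields: int

    def pick_registration_form(forms):
        """Returns the index of the form the above rules treat as the registration form."""
        def rank(item):
            index, form = item
            return (-form.password_fields,
                    -form.text_fields,
                    -form.multi_select_fields,
                    -form.single_select_fields,
                    index)                    # earlier form wins a complete tie
        index, _ = min(enumerate(forms), key=rank)
        return index

    # Example: the second form has more password fields, so it is treated as registration.
    forms = [FormSummary(1, 2, 0, 0), FormSummary(2, 3, 0, 1)]
    assert pick_registration_form(forms) == 1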

Why is the crawler not filling my login forms?

The crawler identifies login and registration forms based on the presence of a password field. However, it can only enter a username or email address if the related fields:

If either of these conditions is not met, the crawler will successfully identify the form but will be unable to enter the corresponding data correctly.

Handling application errors during crawl

These settings control how Burp Scanner handles application errors (connection failures and transmission timeouts) that arise during the crawl phase of the scan.

You can configure the following options:

You can leave any setting blank to disable it.
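
As a rough sketch of how this kind of error handling typically works (the retry count, backoff, and thresholds here are illustrative assumptions, not Burp's exact settings):

    import time
    from urllib.error import URLError
    from urllib.request import urlopen

    def fetch_with_error_handling(url, retries=2, backoff_seconds=2.0, timeout=10):
        """Retries a failed or timed-out request a fixed number of times.

        Returns the response body, or None if every attempt failed; a caller
        could count consecutive failures and pause the crawl once some
        threshold is reached.
        """
        for attempt in range(retries + 1):
            try:
                return urlopen(url, timeout=timeout).read()
            except (URLError, TimeoutError):
                if attempt < retries:
                    time.sleep(backoff_seconds * (attempt + 1))   # simple linear backoff
        return None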

Miscellaneous crawl settings

These settings let you customize some details of the crawl:

Embedded browser options

These settings allow you to control whether Burp Scanner uses the embedded Chromium browser for navigation during both the crawl and audit phases of a scan. By default, this experimental feature is disabled. However, you can choose to either manually enable it or allow Burp to make the decision for you based on its assessment of your machine. We recommend using a machine with at least 2 CPU cores and 8 GB RAM.

You can also control whether the embedded browser loads resources from hosts that are not in-scope, and set a read timeout for site resources.
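
For example, the out-of-scope resource setting conceptually amounts to a check like the following before the browser fetches a page resource (a hypothetical helper for illustration, not Burp's API):

    from urllib.parse import urlsplit

    def should_load_resource(resource_url, in_scope_hosts, allow_out_of_scope=False):
        """Decides whether a page resource (script, image, stylesheet) should be fetched."""
        host = urlsplit(resource_url).hostname or ""
        return allow_out_of_scope or host in in_scope_hosts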