
Tales from the Web Scanning Front: Blacklisting

The smell of melting BlackBerrys/iPhones/Droids. You have probably smelled it before: you began testing an application, forgot to blacklist the “Contact Us” page, and everyone who receives “Contact Us” email got pummeled with messages during the test.

We often remind our customers about this kind of logistical trouble, but we still manage to get the frantic, breathless, panicky phone call when the recipients of the “Contact Us” page begin receiving 1,000 emails within 10 minutes.

So what do you do to prevent this from happening? It’s actually very simple.

First, a wee bit of background on web scanners. Because all applications are different (different page names, different parameter names, vulnerable in different spots to different attacks, etc.), web scanners have to crawl the targeted websites and then attack every page and parameter with hundreds of attacks. Unless told otherwise, every single page will be crawled and every parameter attacked.

Think about it: this includes the following kinds of pages:

  • E-Mail the sales team
  • E-Mail tech support
  • Wire the money
  • Delete this blog
  • Delete this item
  • Reset the admin password

Fortunately, all modern scanners have blacklisting technology. A blacklist in this context simply tells the scanner not to crawl and/or attack a given page.

During your planning period, before you execute any application test, carefully consider the pages on your site that you don’t want the scanner to crawl dozens of times. Then simply add the URLs for those pages to the blacklist in your scanner. It’s that easy.
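To make that concrete, here is a minimal sketch of what a blacklist amounts to under the hood. The exact configuration syntax varies by scanner (most expose it as a list of URLs or patterns in the scan settings), and the paths and URLs below are invented for illustration.

```python
import re

# Hypothetical exclusion patterns -- a real scanner takes these through its own
# blacklist/exclusion settings; the paths here are made up for illustration.
BLACKLIST_PATTERNS = [
    r"/contact(-|_)?us",       # "Contact Us" / e-mail-the-sales-team forms
    r"/support/email",         # e-mail tech support
    r"/admin/reset-password",  # reset the admin password
    r"/blog/.*/delete",        # delete this blog or item
]

def is_blacklisted(url: str) -> bool:
    """True if the scanner should neither crawl nor attack this URL."""
    return any(re.search(p, url, re.IGNORECASE) for p in BLACKLIST_PATTERNS)

# The crawler consults the list before queueing a page for attack:
print(is_blacklisted("https://example.com/Contact_Us"))       # True  -- skipped
print(is_blacklisted("https://example.com/products/widget"))  # False -- scanned
```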

Whether you outsource your scanning, use software in-house or use a SaaS service, you will have far fewer people screaming at you if you take some time to blacklist those pages and prevent the unexpected deluge in your co-workers’ inboxes.

Spending two minutes to properly configure your scanner will help avoid potential problems and keep the office free from the smell of burnt plastic.

 

Tales from the Web Scanning Front: Don’t Eat the Entire Buffet at Once

One of the more common problems that we see is customers trying to bite off more of their application infrastructure at once than they can chew.  A certain amount of planning will yield better, more digestible results with substantially less indigestion.

Dropping all of acme.com into your web scanner when there are 100 applications with 50,000 pages across 60 subdomains is likely not an optimal strategy.  Here are some considerations:

  • Scan time:  Assuming reasonable connectivity and application server horsepower, a scan of a medium-sized application can take 3–12 hours.  Scanning 60 applications at once will take a week or more before the scan completes and you can start working on the results.
  • Information Segmentation:  Most enterprises will have more than one development team.  It’s not the best policy to ship detailed information about all of your vulnerabilities to people who don’t need to know it.  Also, it’s much easier to have one report per application that you can just send to the team coding it so that they can fix just the vulnerabilities listed in the report.
  • Report Size:  A scan that large will create a report that will be immense if you have any significant number of findings.  Even if your vendor segments and paginates the report, it is going to be harder to navigate than a series of smaller reports.
  • Re-Scanning: Once the developers start remediating vulnerabilities, you will be asked to re-scan to give a clean bill of health for each application.  You don’t want to have to wait the week or more an enterprise scan takes to update the development team.

The one downside to all of this is that you will have to kick off and monitor more scans.  If you have a large number of applications and this is likely to be a logistical headache, you should consider an enterprise portal to schedule and monitor scans and deliver scan results (full disclosure, we offer such a tool).
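If you do end up with a pile of smaller scans, the bookkeeping can be as simple as a list of applications and a loop. Here is a minimal sketch; start_scan is a hypothetical placeholder for whatever interface your scanner or portal actually exposes (CLI, REST API, or scheduler), and the hostnames and addresses are invented.

```python
# One scan job per application instead of one giant scan of everything at acme.com.
# `start_scan` is a hypothetical stand-in for your scanner's or portal's real
# interface; hostnames and e-mail addresses are made up.

APPLICATIONS = {
    "store":    {"target": "https://store.acme.com",    "report_to": "store-dev@acme.com"},
    "support":  {"target": "https://support.acme.com",  "report_to": "support-dev@acme.com"},
    "intranet": {"target": "https://intranet.acme.com", "report_to": "intranet-dev@acme.com"},
    # ...one entry per application, each with its own schedule and report recipient
}

def start_scan(name: str, target: str, report_to: str) -> None:
    """Placeholder: queue a scan job and route its report to the owning team."""
    print(f"queued scan '{name}' against {target}; report goes to {report_to}")

for name, cfg in APPLICATIONS.items():
    start_scan(name, cfg["target"], cfg["report_to"])
```

Each team then gets a report scoped to its own application, which also addresses the information segmentation and report size points above.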

As in most endeavors, a bit of planning goes a long way in making life easier.  Giving some thought to breaking up your scans will make your application scanning program a lot easier to run and more effective.

Tales from the Web Scanning Front: Why is This Scan Taking So Long?

As CEO, I’m constantly emphasizing the importance of customer support and trying to attend several support calls each week to stay on top of our support quality and what customers are asking.

Surprisingly, application scan times are one of the most common issues raised by customers.  Occasionally, scans will take days or even weeks.

At this point, I would say that in almost all cases, the issue lies within the application’s environment rather than within the scanning software.

First, some background on web application security scanners. Web scanners first crawl websites, enumerate attack points and then create custom attacks based on the site.  So, for example, if I have a small site with 200 attackable inputs and each one can be attacked 200 ways, with each attack requiring 2 requests, I have 200*200*2 or 80,000 requests to assess that site.

Now NTOSpider can be configured to use up to 64 simultaneous requests, so depending on the response time from the server, you can run through requests very quickly.  Assuming, for example, 10 requests a second, that’s 600 per minute and 36,000 per hour, and you can get through that site in 2.22 hours.

The problem is that quite often the target site is not able to handle 10 or even 1 request per second.  Some reasons can include:

  • Still in development – The site is still in development and is running with limited processing power and/or memory.
  • Suboptimal optimization – The site is not built to handle a high level of traffic, and this has not yet shown up in QA.  We were on the phone with a customer last month who let us look at the server logs, and we saw that one process involved in one of our requests was chewing up 100% of the CPU for 5 seconds.  Another application was re-adding every item to the database each time the shopping cart was updated (as opposed to just the changes), and our 5,000-item cart was severely stressing the database.
  • Middleware – Not to bash any particular vendor (ColdFusion), but some middleware is quite slow.

So let’s look at our 80,000-request example from above and assume that our site can only handle 1 request per second.  Our 2.2-hour scan time balloons to 22 hours.  For the 5-second response in bullet 2, we get to 4.6 days for our little site.  The good news is that NTOSpider can be configured to slow itself down so as to not DoS the site (this is our Auto-Throttle feature).  The bad news is that it will take some time.
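If you want to sanity-check those numbers, the arithmetic is easy to reproduce.  Here is a quick sketch using the figures from the example above (illustrative values only, not NTOSpider settings):

```python
# Back-of-the-envelope scan-time math using the figures from the example above.
inputs            = 200   # attackable inputs found by the crawl
attacks_per_input = 200   # attack variations per input
reqs_per_attack   = 2     # requests needed per attack

total_requests = inputs * attacks_per_input * reqs_per_attack  # 80,000

for label, reqs_per_second in [("healthy server", 10),
                               ("struggling server", 1),
                               ("5-second responses", 1 / 5)]:
    hours = total_requests / reqs_per_second / 3600
    print(f"{label:20s}: {hours:7.1f} hours ({hours / 24:.1f} days)")

# healthy server      :     2.2 hours (0.1 days)
# struggling server   :    22.2 hours (0.9 days)
# 5-second responses  :   111.1 hours (4.6 days)
```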

So what’s a poor tester to do?

  • Beefier hardware – If you are budgeting for a web scanner, consider spending a couple of extra thousand dollars on some decent hardware to test your apps.  (Note: a modern laptop with the right amount of RAM for the OS you are running – 4 GB for a 32-bit OS, 8 GB for a 64-bit OS – will solve 90% of all performance issues.)
  • Scheduling – In some cases, you can schedule scans so that even if they take longer, you still get the results in time.
  • Segmenting – If you know that only a portion of the site has changed, you can target the scan to test only that subset and dramatically reduce scan time (see the sketch below).
  • Code optimization – Not to put too fine a point on it, but if a single request takes 5 seconds to process, a hacker can DoS your site by hand.  You might want the developers to look at tuning the code.
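To illustrate the segmenting idea, here is a minimal sketch of scoping a scan to just the part of the site that changed. The include/exclude mechanism and the URLs are hypothetical; a real scanner exposes this through its own scope or blacklist settings.

```python
import re

# Hypothetical scope settings: only crawl/attack the rewritten checkout flow,
# while still excluding the usual mail-sending and destructive pages.
SCOPE_INCLUDE = [r"^https://www\.example\.com/checkout/"]
SCOPE_EXCLUDE = [r"/contact-us", r"/admin/"]

def in_scope(url: str) -> bool:
    """Attack a URL only if it matches an include pattern and no exclude pattern."""
    included = any(re.search(p, url) for p in SCOPE_INCLUDE)
    excluded = any(re.search(p, url) for p in SCOPE_EXCLUDE)
    return included and not excluded

print(in_scope("https://www.example.com/checkout/payment"))  # True  -- re-scanned
print(in_scope("https://www.example.com/blog/latest-post"))  # False -- skipped this round
```

Re-scanning just the changed subset this way is also what keeps the per-application re-scans from the previous post quick enough to be practical.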