Coverage of web application scanners

My buddy rsnake over at Ha.ckers.org posted a report from Larry Suto about tests he performed on web application scanners, comparing how well they cover a web application's code base.

The report is interesting on many fronts, partly because the tool I help build at NT OBJECTives came out on top, but also because it's the first review of its kind to look at a statistic that compares scanners in a quantifiable way.

Some comments on the site, from users of the other products or from the vendors themselves, claim that web scanners are not designed to be “point and shoot,” as they say, and that a human should train the scanner on each web app. I think working from that assumption does users a disservice.

A scanner should do as much as it can on its own, leaving humans free to do their own pen testing, and/or pointing pen testers to areas of interest. If you're an organization with hundreds or thousands of web apps that need testing, do you really have the manpower to teach your “automated web scanner” how to test each of those apps?

Do you really have time to spend clicking every link and filling out every form on a website with some 3,000+ pages, or do you want the scanner that does the best job of doing all of this for you?
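
To make that concrete, here's a minimal sketch of the kind of automated crawl a “point and shoot” scanner performs: discover every same-site link and enumerate every form, with no human walking the app by hand. This is purely illustrative, not how any particular product works; the seed URL is a placeholder.

```python
# Minimal sketch of automated coverage: breadth-first crawl of one site,
# collecting every link to follow and every form to exercise.
# Standard library only; not any real scanner's implementation.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkFormParser(HTMLParser):
    """Collects link targets and form actions from a single page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()
        self.forms = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.links.add(urljoin(self.base_url, attrs["href"]))
        elif tag == "form":
            # Record the submit target; a real scanner would go on to
            # fill each input with test payloads at this point.
            self.forms.add(urljoin(self.base_url, attrs.get("action", self.base_url)))


def crawl(seed, max_pages=3000):
    """Crawl one site breadth-first; return the pages and forms found."""
    site = urlparse(seed).netloc
    queue, seen, forms = [seed], set(), set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen or urlparse(url).netloc != site:
            continue  # skip revisits and off-site links
        seen.add(url)
        try:
            html = urlopen(url).read().decode("utf-8", errors="replace")
        except OSError:
            continue  # dead link; a scanner would log it and move on
        parser = LinkFormParser(url)
        parser.feed(html)
        forms |= parser.forms
        queue.extend(parser.links - seen)
    return seen, forms


if __name__ == "__main__":
    pages, forms = crawl("http://example.com/")  # placeholder seed URL
    print(f"crawled {len(pages)} pages, found {len(forms)} forms")
```

Even this toy version covers thousands of pages without anyone clicking a thing, which is the whole point: the training a human would do per-app doesn't scale past a handful of applications.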