Tag Archives: Web App Sec


Mobile Application Security 101

Mobile Applications – Still Insecure

Businesses are racing to meet the demand for mobile applications, yet mobile application security is an afterthought, just as web application security was when web applications started to proliferate.

As an industry, we know a great deal about securing web applications that applies to mobile, but most organizations are still repeating past mistakes and making new, mobile-specific mistakes that expose businesses to security incidents.

According to a recent Gartner report, “Most enterprises are inexperienced in mobile application security. Security testing, if conducted at all, is often done casually — not rigorously — by developers who are mostly concerned with the functionality of applications, not their security.[1]” In the same report, the firm indicates that “through 2015, more than 75% of mobile applications will fail basic security tests.[2]”


Don’t Forget Mobile Web Services

There has been so much talk about mobile device and mobile client security, but the key thing to keep in mind when approaching mobile application security is that it’s critical to test both the client and the communication to the web service that powers it. For example, if you’re using your Twitter app, the primary logic that resides on the mobile client is display and user authentication. The app must then communicate with a web service in order to get and send Tweets. This web service is the real power of Twitter and where the real security risk lies. Why attack one user when you can attack the web service used by millions?

Even though mobile applications leverage a client-server model, they are built with entirely new technologies that necessitate new processes, tools and skills. While mobile application security does drive these new requirements, the overall problem is one the security industry is already well acquainted with, because the vulnerabilities showing up in mobile applications aren’t new at all. We often say that we are “Hacking like it’s 1999” because the reality is that mobile vulnerabilities are just the same old vulnerabilities we have been hunting for over 13 years now: SQL injection, overflows, and client-side attacks.

These new requirements for mobile testing are driven by the new programming languages used for building mobile clients (Objective-C and Android’s Java variant), the new formats used by back-end web services (JSON and REST) and the new authentication and session management options (OAuth, HMAC, etc.). And while those familiar SQL injection attacks look almost exactly like they did 10 years ago, you just can’t find them without understanding how to deliver these attacks within the new structures.
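To make that concrete, here is a minimal sketch (the field names and endpoint are illustrative, not from any real app) of the same old injection payload delivered first the 1999 way and then the mobile way:

```python
payload = "' OR '1'='1"  # the same SQL injection string we have hunted since 1999

# Then: the payload arrives in a classic form-encoded POST from a browser
form_body = f"username=admin&password={payload}"

# Now: the identical payload arrives wrapped in JSON, sent by a mobile
# client to a REST back end - same vulnerability, new packaging
json_body = f'{{"username": "admin", "password": "{payload}"}}'

print(form_body)
print(json_body)
```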


SQL Injection Alive and Well

We call mobile vulns the Where’s Waldo of application security. They’re your old familiar friend, SQL Injection, who looks almost exactly like he did 10 years ago – maybe with a few gray hairs – but you just can’t find him as easily because he’s in an all-new environment. We simply need to adjust to this new landscape and start looking for our old friend again.

Another important thing to keep in mind about mobile application security testing is that there ARE tools that automate the process. There just aren’t that many of them that automate the entire process or do it very well.

We see several categories of security vulnerabilities in mobile applications.

More on Mobile Application Security


[1] [2] Gartner, Technology Overview: Mobile Application Security Testing for BYOD Strategies, by Joseph Feiman and Dionisio Zumerle, August 30, 2013.


How to Overcome the Shortfalls of Web Application Security Scanners when Testing Mobile & Rich Internet Applications

You’ve built a custom rich internet application that is sure to become your business’ next major revenue stream. Conscious of security, you’ve ensured that the native application authenticates to the server, and you’ve run the app through a web application security scanner to identify weaknesses in the code. Those vulnerabilities have been remediated, and now you’re ready to go live.

Not so fast.

Despite your best intentions, chances are good your rich internet application is going live with dangerous security flaws. Most traditional web application security scanners and authentication methods do not provide the necessary protection when you’re dealing with modern application architectures, data formats and other underlying technologies. However, you can still build state-of-the-art rich internet applications with reliable and safe web application security by following these simple steps.

Step 1: Understand your chosen technology and its security requirements.

Classic HTML applications are no challenge for web application security scanners because that’s what the scanners were originally built to test. However, rich internet applications based on newer technologies like AJAX, JSON and REST are a different story – most security scanners do not support these new formats unless they’ve been re-architected. Due to the heavy use of JavaScript or the complete lack of HTML, these new application formats and technologies make it nearly impossible for scanners to crawl an app. Plus, mobile applications further complicate matters because they often use web services which cannot be crawled at all.

To make matters worse, attackers are finding new ways to exploit application programming interfaces (APIs) associated with mobile applications. Web application session management techniques fail to deliver the protection developers expect, and these old and insecure techniques do not stop attackers from tampering with the application, committing fraud or performing man-in-the-middle attacks.

Therefore, it is important to understand the technologies used in your rich internet applications so you can find an appropriate web application security scanner and/or supplement your scanning efforts accordingly. Below is a list of the technologies that may require a more in-depth security solution:

  • AJAX applications: JSON (jQuery), REST, GWT (Google Web Toolkit)
  • Flash remoting: Action Message Format (AMF)
  • HTML5
  • Back end of mobile apps powered by JSON, REST and other custom formats
  • Web services: JSON, REST, XML-RPC, SOAP
  • Complex application workflows: Sequences (shopping cart and other strict processes) and XSRF/CSRF tokens

Step 2: Understand the vulnerabilities of rich internet applications.

There are two key qualities you should require of a web application security scanner that you plan to use for modern rich internet applications. The first is the ability to import proxy logs. The second is an understanding of mobile application traffic, which enables the scanner to create attacks to test for security flaws. Vendors are often quick to advertise their scanners’ ability to be fed data from a proxy, but if the scanner is not familiar with JSON and REST, it will not be able to create attack variations – even when fed recorded traffic.

Like web application security scanners, traditional authentication methods fail to deliver the protection they once promised. While they once sufficed for traditional server-side web applications, today’s authentication methods simply aren’t sophisticated enough to provide adequate web application security for new rich internet applications and mobile apps. For example, attackers can exploit weak passwords when a scheme only authenticates the user and not the application. This can be avoided by using a client-side certificate to identify the application, but this isn’t feasible for all apps – especially customer-facing mobile apps.

Step 3: Determine whether your web application security scanner is capable.

You can – and should – ask your web application security scanner provider what technologies the tool is able to scan. But don’t leave it at that – verify that what they say is true. For instance, you can test a scanner’s coverage of an AJAX application by analyzing the request/response traffic. To do so, simply enable the scanner’s detailed logging feature, run the scanner through a proxy like Paros, Burp or WebScarab, and save the logs for manual review.
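If your scanner lacks detailed logging, a small proxy script can capture the traffic for you. Here is a minimal sketch using mitmproxy’s addon API (mitmproxy is an assumption on our part – the post names Paros, Burp and WebScarab, and any of them works the same way):

```python
# log_flows.py - run with: mitmdump -s log_flows.py
# Point the scanner at the proxy, then review scanner_traffic.log by hand.
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    with open("scanner_traffic.log", "a") as log:
        log.write(f"{flow.request.method} {flow.request.pretty_url}\n")
        if flow.request.content:  # POST/PUT bodies show whether JSON was attacked
            log.write(flow.request.get_text() + "\n")
```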

JSON also poses a unique challenge to web application security scanners: they must be able to decipher the format and insert attacks to test the security of web application interfaces. A review of detailed logs of request/response traffic will indicate whether the web application security scanner is fully capable of testing rich internet applications like yours. However, not all web application security scanners provide detailed logging. If this is the case, you will need to set up a proxy to capture traffic during the scan. Begin by scanning only a page that uses JSON, then check whether the scanner’s requests include attacks on the JSON traffic.
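The core of what a JSON-capable scanner must do is simple to sketch: parse the body, substitute an attack into each field, and replay the request. A toy version follows (the sample body and payload are hypothetical; real scanners also recurse into nested objects and arrays):

```python
import json

def mutate_json(body: str, payload: str):
    """Yield one mutated copy of the JSON body per top-level string field."""
    doc = json.loads(body)
    for key, value in doc.items():
        if isinstance(value, str):
            mutated = dict(doc)
            mutated[key] = payload  # substitute the attack into this field
            yield key, json.dumps(mutated)

for field, attack_body in mutate_json('{"user": "bob", "zip": "75001"}', "' OR '1'='1"):
    print(f"attacking {field}: {attack_body}")
```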

Step 4: Bolster manual testing efforts and custom web application security models.

Attackers are increasingly targeting back-end servers. And while new mobile APIs built on formats like JSON create new ways to engage customers in rich internet applications, they also create new ways for attackers to reach back-end servers. The only way to discover and remediate API security flaws, authentication weaknesses, protocol-level bugs and load-processing bugs is with several rounds of testing. Also, understand that you cannot rely on SQL or basic authentication to protect the back end. Develop server-based applications to anticipate attacks by continually verifying the integrity of the application and its runtime environment.
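One concrete way to verify that requests really come from your application, rather than relying on basic authentication alone, is HMAC request signing (HMAC was mentioned earlier among the newer authentication options). A minimal sketch – the secret, path and header names are all hypothetical:

```python
import hashlib
import hmac
import time

SECRET = b"per-app-shared-secret"  # hypothetical; provisioned per application

def sign_request(method: str, path: str, body: bytes) -> dict:
    """Return headers carrying a timestamped HMAC over the request."""
    timestamp = str(int(time.time()))
    message = "\n".join([method, path, timestamp]).encode() + b"\n" + body
    signature = hmac.new(SECRET, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

# The server recomputes the HMAC and compares with hmac.compare_digest();
# a stale timestamp or a bad signature means the request is rejected.
print(sign_request("POST", "/api/v1/orders", b'{"item": 7}'))
```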

Finally, when developing rich mobile applications, keep the following tips in mind:

  • Data provided by the client should never be trusted.
  • A device’s mobile equipment identifier should never be used to authenticate a mobile application, but do use multiple techniques to verify that requests are from the intended user.
  • Because session tokens for mobile apps rarely expire, attackers can replay a stolen token for a very long time – so expire tokens.
  • Credentials should not be stored in the application’s data store, local to the device.
  • When requiring SSL, a valid certificate should be required – never accept invalid or self-signed certificates (see the sketch after this list).
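On that last point, certificate validation is easy to get right and just as easy to disable by accident. A sketch in Python’s requests library (the endpoint is hypothetical):

```python
import requests

API = "https://api.example.com/v1/account"  # hypothetical endpoint

# Correct: validation is on by default; an invalid or self-signed
# certificate raises requests.exceptions.SSLError instead of connecting.
response = requests.get(API, timeout=10)

# The anti-pattern the tip warns about: verify=False accepts ANY
# certificate and hands man-in-the-middle attackers your traffic.
# requests.get(API, timeout=10, verify=False)  # never ship this
```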

Guaranteeing reliable web application security for rich internet applications and mobile apps can be tricky business. However, completing the proper research, choosing the right security scanner, and performing an ample amount of testing will help detect vulnerabilities and ward off new attacks, allowing your application to be successful in the marketplace.


An Open Letter to Barack Obama: If You Aren’t Sure of Health Exchange Security, Shut It Down Now

Stability is Only the First Issue – Security Will Be Healthcare.gov’s Real Achilles Heel
There has been a significant amount of attention paid to the problems of the Obamacare website. While these problems are certainly cause for concern, there is an even more serious group of problems that likely exists and needs to be addressed. These have to do with the security of the website and the confidential data that it is collecting on millions of Americans. Given the problems with the site that have already been discovered, if concerns about security cannot be addressed, the site should be shut down until they can be. Slow performance is an inconvenience. The dissemination of confidential information on millions of Americans would be a disaster. Given that a casual test of the home page of the site revealed a security flaw, we are gravely concerned about the security of the site as a whole.

We would emphasize that this is not a hypothetical problem; confidential data is stolen every day by hackers who exploit the security flaws discussed below. If the designers of healthcare.gov have not addressed these issues, the site is vulnerable to user data being stolen, and it is almost certain that hackers will exploit this. Unless the Administration is certain that the site can securely protect the confidential user data it is collecting, the site should be shut down until it has that degree of confidence.

The Obamacare Website is a Prime Target for Hackers
It’s obvious this site is a target for hackers. First and foremost, it is set up to collect and aggregate personal, confidential information on millions of Americans. Second, the US government always has enemies, and embarrassing the administration would appeal to a large segment of the hacker community. Given the current NSA scandal, anti-American sentiment in the hacker community may be at an all-time high. Finally, many hackers are motivated by augmenting their reputation among other hackers. Hacking healthcare.gov would certainly be a prestigious hack.

The Security Flaws in the Site Are Still Largely Unknown
Hacking requires the ability to make thousands of requests against a site to test for flaws. A single page may require a thousand tests to ensure that it is secure. Healthcare.gov has such poor stability that this is nearly impossible right now. Once the stability of the site improves, hackers will test it thoroughly. At that point, the true security profile of the site will become clear.

Healthcare.gov Likely Has Significant Flaws
Given the multitude of problems with the site, it is clear quality testing was lax. It is generally true that functionality testing (i.e., does the site actually work?) is prioritized over security testing. It is likely that the site’s security is even worse than its functionality. We very lightly and casually poked around the first page of the website and found a significant vulnerability that is easy to discover and prevent. It is highly unlikely that this is the only vulnerability on the site. We would also point out that fixing problems on the fly under intense pressure is not an intelligent way to fix enterprise software. Human beings are responsible for preventing security flaws, and these are exactly the conditions that lead to security mistakes.

How Website Vulnerabilities Allow Hackers to Steal Confidential User Data
There are two main classes of vulnerabilities that are most concerning. The first is called SQL injection. Web applications, by design, connect to databases, and the databases, by default, give the applications any data they request. If the applications are not secure, hackers can inject commands to steal or alter all of the data in the database. These vulnerabilities are relatively easy to find and correct. Of course, so was the vulnerability we found on the home page, so there is no guarantee that healthcare.gov is free of SQL injection.
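For readers unfamiliar with the mechanics, here is a minimal, self-contained illustration (the table and data are invented for the demo) of how concatenating user input into a query hands an attacker the whole table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE enrollees (ssn TEXT, name TEXT)")
conn.executemany("INSERT INTO enrollees VALUES (?, ?)",
                 [("111-22-3333", "Alice"), ("444-55-6666", "Bob")])

user_input = "' OR '1'='1"  # attacker-supplied value

# Vulnerable: the input is concatenated straight into the SQL string,
# so the quote characters become SQL syntax rather than data.
rows = conn.execute(
    "SELECT * FROM enrollees WHERE ssn = '" + user_input + "'"
).fetchall()
print(rows)  # every row comes back, not just one
```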

The second class of vulnerabilities of significant concern covers who gets to see what information. There are different types of users of an application and generally, there is a class of user, called an admin or administrator, who has broad access to data. This is necessary because administrators are often called upon to fix problems with the site. Applications control who gets to see what by a variety of means. It is very possible to fool the site into thinking that a non-admin user is an admin, giving a hacker broad access to user data. It is very difficult, expensive and time consuming to test for this class of vulnerability.
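A toy sketch of this class of flaw (all names are hypothetical): the server decides who is an administrator based on a value the client sends, which an attacker can simply forge:

```python
records = {"42": {"name": "Alice"}, "43": {"name": "Bob"}}

def get_record(params: dict):
    # Broken: the role comes from the request itself, so any user can
    # claim to be an admin. The fix is to derive the role from the
    # server-side session, never from client-supplied input.
    if params.get("role") == "admin":
        return records  # full data set exposed
    return records.get(params.get("user_id"))

print(get_record({"user_id": "42", "role": "user"}))   # one record
print(get_record({"user_id": "42", "role": "admin"}))  # every record
```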

Regulatory Compliance
It’s interesting that many private organizations are required to adhere to certain regulatory guidelines like PCI, HIPAA and FISMA, but this application seems to escape them. While this application may not fall under HIPAA guidelines, it does store important personal information like Social Security numbers. If it were subject to HIPAA (according to this blog by Erik Kangas, which simplifies the requirements), it would have failed at least two of the requirements. Based on the security vulnerabilities being discovered and reported, it would fail #4, which requires integrity of the data. Requirement #6 states that data must be able to be deleted when needed. From the reports and legal notices we are seeing, it appears that there is NO WAY to delete your data once you provide it.

We just used HIPAA as an example; we could find several failed requirements against PCI as well. So why is it that a government application that stores Social Security numbers isn’t subject to regulatory compliance regarding security?

Given The Risk of a Catastrophic Hack, Shut it Down!
We have no information on what kind of security testing has been done on healthcare.gov. But the factors listed above, along with our security tests, give us significant cause for concern. We believe the Obama Administration should be up front with the public as to what security testing was done, by whom and what the results were. If there is not a very high degree of confidence that healthcare.gov is securely protecting the confidential data entrusted to it by the American people, it needs to be shut down until it can be repaired.


Eight Reasons Why SQL Injection Vulnerabilities Still Exist: A Developer’s Perspective

Knowing how to prevent a SQL injection vulnerability is only half the web application security battle. A multitude of factors come into play when it comes to writing secure code, many of which are out of the developers’ direct control. That’s why common vulnerabilities like SQL injection continue to plague today’s applications, and why application security testing software is so important. These problems can be overcome – with a little insight, organizations can begin to address these challenges directly and better enable developers to remediate SQL injection. Here are the top eight reasons SQL injection vulnerabilities are still rampant:


  • SQL itself is vulnerable. SQL is designed to allow people access to information and is therefore inherently vulnerable, so every developer must know how to prevent SQL injection – not just one or two individuals on your development team.

  • The price of agnosticism. SQL is agnostic, meaning it works across database platforms. The upside is that it allows code to be database-server agnostic, but that is also the source of the problem. To prevent most vulnerabilities, developers should use parameterized SQL or stored procedures specific to the database server – see the sketch after this list.

  • One mistake is all it takes. If just one vulnerability is left unsecured, a hacker can have his way. Every single input must be protected. Unfortunately, this is a tall order for any development team, as there can be tens of thousands of potential vulnerabilities on a single website.

  • Inexperienced developers lack training on old vulnerabilities. New generations of developers do not always receive the training and mentoring necessary to understand how to prevent common application vulnerabilities. They must be taught how to prevent exposing SQL injection vulnerabilities by creating comprehensive validation logic on every parameter or input.

  • Seasoned developers lack training on new technologies. Many veteran developers are using new formats and technologies to develop new types of applications. They must understand that SQL injection should still be considered for every input. For example, inputs from a mobile interface written in JSON that reach the back-end database can be just as vulnerable to SQL injection as any input on an end-user web page.

  • It’s not a priority. Many organizations do not treat fixing web application security vulnerabilities with the importance they should. As a result, developers are generally more concerned with building new features and fixing bugs that impact user functionality.

  • It requires team effort. In order to eradicate SQL injection vulnerabilities, development and web application security teams must collaborate. Developers need security specialists to keep them informed of new hacking techniques, and security teams need developers to eliminate vulnerabilities.

  • Abandoned legacy applications. With the original application developers retired and the source code difficult to locate, vulnerabilities in legacy applications can be difficult or impossible to patch.
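As promised above, a minimal sketch of the parameterized-query fix (using Python’s built-in sqlite3 driver; the table and data are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

name = "alice' OR '1'='1"  # hostile input

# Parameterized: the driver binds the value, so the quote characters
# stay data instead of becoming SQL syntax - the injection is inert.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # [] - no row matches the literal string
```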

As you can see, educating developers on how to prevent SQL injection vulnerabilities won’t completely solve the problem. Organizations must enable developers to build secure code and make web application security testing a priority. Security teams have their perspective as well. Check out this blog to see the Four Reasons Security Teams Can’t Stop SQL Injection.


OWASP Top 10 List Maturing – Evidenced by Minor Changes

The OWASP Top 10 list is well known as the industry standard for what matters in web security. The list, which ranks the most critical risks organizations face through their web applications, was recently updated. The 2013 Top 10 List features some incremental but noteworthy changes that point to the project’s maturity.


Perhaps the most significant change to the Top 10 list is the move of “cross-site scripting” (XSS) from the No. 2 spot to No. 3. This is a big change because web security was built on two things: SQL injection and XSS. Almost every browser these days has some level of XSS prevention, so it’s more difficult to deliver an XSS payload to a user. As a result, XSS represents less of a risk.

By moving to No. 3, XSS switched places with “broken authentication and session management,” which moves up to No. 2 from No. 3. Broken authentication and session management isn’t well understood by most developers, and I suspect that’s why it’s moving up. We see a lot of weak authentication and session management schemes. Developers just aren’t doing a good job with it. But they’re going to have to, because this is a new attack vector that will really matter in the future, especially with the move to mobile.
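To illustrate the kind of scheme developers get wrong: session tokens must be unpredictable and must expire. A sketch of a sane baseline (the TTL and in-memory store are illustrative only; a real deployment uses a shared session store):

```python
import secrets
import time

SESSION_TTL = 15 * 60  # seconds; illustrative choice
sessions = {}  # token -> (user_id, expiry)

def create_session(user_id: str) -> str:
    token = secrets.token_urlsafe(32)  # cryptographically random, not guessable
    sessions[token] = (user_id, time.time() + SESSION_TTL)
    return token

def lookup(token: str):
    entry = sessions.get(token)
    if entry is None or entry[1] < time.time():
        sessions.pop(token, None)
        return None  # unknown or expired token
    return entry[0]

tok = create_session("alice")
print(lookup(tok))       # 'alice'
print(lookup("forged"))  # None
```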

OWASP also renamed what was previously “failure to restrict URL access” to “missing function-level access control” and moved it up one spot, from No. 8 to No. 7. This makes sense, since it doesn’t matter how access is presented. It is either restricted or it’s not.

“Insecure cryptographic storage” and “insufficient transport layer protection” were also combined and renamed “sensitive data exposure,” which came in at No. 6. This essentially merges the two encryption-related items, which makes sense. Sensitive data should be transported over a secure channel and stored in a secure manner. You’re dropping the ball if you do one and not the other.

OWASP also added an item to the Top 10 that I think is long overdue. “Using known vulnerable components” deserves its spot at No. 9. Developers have long used shared libraries and open source code, and oftentimes these components have vulnerabilities that affect the software built with them. This has gone on since the days of PHP-Nuke. I’m shocked that it wasn’t on the list before because it seems so obvious, but maybe that’s why it was overlooked.

Finally, the last change I’d like to point out is also one that I don’t necessarily agree with. That’s the drop that “cross-site request forgery” (CSRF) made from No. 5 to No. 8 on the list. We still see a lot of CSRF, partly because automated testing tools just aren’t good at detecting it. It’s becoming increasingly difficult to test for these vulnerabilities and to know when you need CSRF protection. Unfortunately, it still happens a lot and when an attack is successful it can have significant repercussions – especially when combined with XSS.

The changes that we see in the OWASP 2013 Top 10 won’t impact most people because they are incremental, but overall these changes bode well for the list. It’s getting leaner and better. That’s a sign of the project’s maturity.


An Information Security Place Podcast – Episode 04 for 2012

Hmmm, let’s see if I even remember how to enter this stuff anymore… Yep, you guessed it, we finally recorded another episode – WOOT!
Show Notes:

InfoSec News Update – 


  • Howard Schmidt is Retiring – Link Here
  • Vulnerability Stats of Publicly Traded Companies – Link Here
  • Tool Update – Threadfix from Denim Group – Link Here
  • The Mission Impossible Self-Destructing SATA SSD Drive – Link Here
  • The WAF Wars – Link 1 / Link 2 / Link 3
  • PwnieExpress Releases PwnPlugUI/OS 1.1 – Link Here
  • App for scanning faces to gauge age at bars – Link Here
  • Business Logic Testing defined – Link 1
  • ErrataSec – Wants your hotel PCAP Files – Link 1 / Link 2

Discussion Topic –

  1. Should specific security efforts be validated when the program as a whole is crap? Link Here

Music Notes:

Special Thanks to the guys at RivetHead for use of their tracks – http://www.rivetheadonline.com/

Tour Dates:

  1. June 1 – Dallas – Curtain Club

Intro – RivetHead – “The 13th Step”
News Bed – RivetHead – “Beautiful Disaster”
Discussion Bed – RivetHead – “Difference”
Outro – RivetHead – “Zero Gravity”

Tales from the web scanning front: Don’t eat the entire buffet at once

One of the more common problems that we see is customers trying to bite off more of their application infrastructure at once than they can chew.  A certain amount of planning will yield better, more digestible results with substantially less indigestion.

Dropping all of acme.com into your web scanner when there are 100 applications with 50,000 pages across 60 subdomains is likely not an optimal strategy.  Here are some considerations:

  • Scan time: Assuming reasonable connectivity and application server horsepower, a scan of a medium-sized application can take 3–12 hours. Scanning 60 applications at once will take a week or more before the scan completes and you can start working on the results.
  • Information Segmentation:  Most enterprises will have more than one development team.  It’s not the best policy to ship detailed information about all of your vulnerabilities to people who don’t need to know it.  Also, it’s much easier to have one report per application that you can just send to the team coding it so that they can fix just the vulnerabilities listed in the report.
  • Report Size:  A scan that large will create a report that will be immense if you have any significant number of findings.  Even if your vendor segments and paginates the report, it is going to be harder to navigate than a series of smaller reports.
  • Re-Scanning: Once the developers start remediating vulnerabilities, you will be asked to re-scan to give a clean bill of health for each application.  You don’t want to have to wait the week or more an enterprise scan takes to update the development team.

The one downside to all of this is that you will have to kick off and monitor more scans.  If you have a large number of applications and this is likely to be a logistical headache, you should consider an enterprise portal to schedule and monitor scans and deliver scan results (full disclosure, we offer such a tool).

As in most endeavors, a bit of planning goes a long way in making life easier. Giving some thought to breaking up your scans will make your application scanning program a lot easier to run and more effective.

Surviving the Week – 02/17/2012

The NTO team keeps growing and the demands of running the business and supporting our customers are keeping me busy… and it’s a blast. But now it’s good to be getting back to these weekly postings.

On to the news, so I can help keep you all informed about what’s important in web app security.

  • Will a standardized system for verifying Web identity ever catch on? – Maybe the question is “Do we even want a standardized system for verifying Web identity?” I, for one, see stuff like this every day, and if the FBI’s site can be hacked, who is going to promise the security of OpenID? It will just become the single place an attacker has to attack to get access to everyone’s confidential/private data.
  • CSRF with upload – XHR-L2, HTML5 and Cookie replay – XHR-Level 2 calls embedded in an HTML5 browser can open a cross-domain socket and deliver an HTTP request. Cross-domain calls will abide by CORS, but browsers end up generating preflight requests to check policy and, based on that, will allow cookie replay. Interestingly, multipart/form-data requests will go through without the preflight check, and “withCredentials” allows cookie replay. This is how some new cutting-edge attacks are going to be performed.
  • Vote Now! Top Ten Web Hacking Techniques of 2011 – This is an incredibly useful survey that they do each year. So, please vote to help the community get an idea of what is interesting and important to you.
  • Twitter Enables HTTPS By Default – As sites like Google, Facebook and now Twitter start pushing all traffic to HTTPS, I fear that users will mistake this for real security. “Oh, I can put all my information on Facebook/Twitter/etc now because they are ‘secure’. See there is even a little padlock icon in my browser when I go to those sites, just like the bank.” – FAIL

Tales from the Web Scanning Front: Why is This Scan Taking So Long?

As CEO, I’m constantly emphasizing the importance of customer support and trying to attend several support calls each week to stay on top of our support quality and what customers are asking.

Surprisingly, application scan times are one of the most common issues raised by customers.  Occasionally, scans will take days or even weeks.

At this point, I would say that in almost all cases, there is an issue that lies within the application’s environment as opposed to something within the software.

First, some background on web application security scanners. Web scanners first crawl websites, enumerate attack points and then create custom attacks based on the site. So, for example, if I have a small site with 200 attackable inputs and each one can be attacked 200 ways, with each attack requiring 2 requests, I have 200*200*2 or 80,000 requests to assess that site.

Now, NTOSpider can be configured to use up to 64 simultaneous requests, so depending on the response time from the server, you can run through requests very quickly. Assuming, for example, 10 requests a second, that’s 600 per minute, 36,000 per hour, and you can get through that site in 2.22 hours.

The problem is that quite often the target site is not able to handle 10 or even 1 request per second.  Some reasons can include:

  • Still in development – The site is in development and has limited processing power and/or memory.
  • Suboptimal optimization – The site is not built to handle a high level of traffic and this has not yet shown up in QA.  We were on the phone with a customer last month who allowed us to look at the server logs and we saw that one process involved in one of our requests was chewing up 100% of the CPU for 5 seconds.  Another application was re-adding every item to the database each time the shopping cart was updated (as opposed to just the changes) and our 5,000 item cart was severely stressing the database.
  • Middleware – Not to bash any particular vendor (ColdFusion), but some middleware is quite slow.

So let’s look at our 80,000 request example from above and assume that our site can only handle 1 request per second.  Our 2.2 hour scan time balloons to 22 hours.  For our 5 second response in bullet 2, we get to 4.6 days for our little site.  The good news is that NTOSpider can be configured to slow itself down so as to not DOS the site (this is our Auto-Throttle feature).  The bad news is that it will take some time.
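The arithmetic above is easy to play with yourself; a quick sketch using the post’s own numbers:

```python
# 200 inputs x 200 attacks x 2 requests each = 80,000 requests total
total_requests = 200 * 200 * 2

for rate in (10, 1, 0.2):  # requests/second: healthy, slow, 5 s per response
    hours = total_requests / rate / 3600
    print(f"{rate:>4} req/s -> {hours:6.1f} hours ({hours / 24:.1f} days)")
```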

So what’s a poor tester to do?

  • Beefier hardware – If you are budgeting for a web scanner, consider spending a couple of thousand extra dollars on some decent hardware to test your apps. (Note – a modern laptop with enough RAM for the OS you are running – 32-bit OS = 4 GB of RAM / 64-bit OS = 8 GB of RAM – will solve 90% of all performance issues.)
  • Scheduling – In some cases, you can schedule scans so that even if they run longer, you can still get things done in time.
  • Segmenting – In some cases, if you know that only a portion of the site has changed, you can target the scan to test only that subset and dramatically reduce scan time.
  • Code Augmentation – Not to put too fine a point on it, but if a single request takes 5 seconds to process, a hacker can DoS your site by hand. You might want the developers to look at adjusting the code.


Surviving the Week – 12/09/2011

Sorry I missed last week; this one will cover the last two weeks.