

Mobile Application Security 101

Mobile Applications – Still Insecure

Businesses are racing to meet the demand for mobile applications, yet mobile application security is an afterthought, just as web application security was when web applications started to proliferate.

As an industry, we know so much about securing web applications that applies to mobile, but most organizations are still repeating past mistakes and making new, mobile-specific ones that expose businesses to security incidents.

According to a recent Gartner report, “Most enterprises are inexperienced in mobile application security. Security testing, if conducted at all, is often done casually — not rigorously — by developers who are mostly concerned with the functionality of applications, not their security.”[1] In the same report, the firm indicates that “through 2015, more than 75% of mobile applications will fail basic security tests.”[2]


Don’t Forget Mobile Web Services

There has been a lot of talk about mobile device and mobile client security, but the key thing to keep in mind when approaching mobile application security is that it’s critical to test both the client and its communication with the web service that powers it. For example, when you use your Twitter app, the primary logic residing on the mobile client is display and user authentication. The app must then communicate with a web service to get and send Tweets. That web service is the real power of Twitter and where the real security risk lies. Why attack one user when you can attack a web service used by millions?

Even though mobile applications leverage a client-server model, they are built with entirely new technologies that necessitate new processes, technologies and skills. While mobile application security does drive these new requirements, the overall problem is one the security industry is already well acquainted with, because the vulnerabilities showing up in mobile applications aren’t new at all. We often say that we are “Hacking like it’s 1999” because the reality is that mobile vulnerabilities are the same old vulnerabilities we have been hunting for over 13 years now: SQL injection, overflows, and client-side attacks.

These new requirements for mobile testing are driven by the new programming languages used for building mobile clients (Objective-C and Android’s Java variant), the new formats used by back-end web services (JSON and REST) and the new authentication and session management options (OAuth, HMAC, etc.). And while those familiar SQL Injection attacks look almost exactly like they did 10 years ago, you just can’t find them without understanding how to deliver them within the new structures.
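To make that concrete, here is a quick sketch (the endpoint and field names are invented for illustration) of the same old payload riding inside a JSON body to a REST-style mobile back end:

```python
# A toy illustration, not a real endpoint: the payload is 1999-vintage SQL
# injection, but today it rides inside a JSON body to a REST API, so a tester
# (or scanner) must know how to place attacks within these new structures.
import json
import urllib.request

classic = "id=1' OR '1'='1"  # how the attack traveled in a 1999 query string

body = json.dumps({
    "user": {"id": "1' OR '1'='1"},  # same payload, new JSON envelope
    "device": "iPhone",
})
req = urllib.request.Request(
    "https://api.example.com/v1/timeline",  # hypothetical mobile web service
    data=body.encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# A scanner that only understands name=value query strings never builds this
# request, so the injectable "user.id" field goes untested.
```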


SQL Injection Alive and Well

We call mobile vulns the Where’s Waldo of application security. It’s your old familiar friend, SQL Injection, who looks almost exactly like he did 10 years ago – maybe with a few gray hairs – but you just can’t find him as easily because he’s in an all-new environment. We simply need to adjust to this new landscape and start looking for our old friend again.

Another important thing to keep in mind about mobile application security testing is that there ARE tools that automate the process. There just aren’t that many of them that automate the entire process or do it very well.

We see several categories of security vulnerabilities in mobile applications, from the same old web issues like SQL injection to overflows and mobile client attacks.



[1] [2] Gartner, Technology Overview: Mobile Application Security Testing for BYOD Strategies, by Joseph Feiman and Dionisio Zumerle, August 30, 2013.


Why are we still vulnerable to side-channel attacks? (and why should I care?)

2013 B-Sides San Francisco Talk Summary Series

This was a great talk given by Jasper Van Woudenberg of Riscure.

Whenever I attend these conferences, I always include a couple of talks that are pure indulgence, to keep me awake, sustain my enthusiasm, and broaden my knowledge. At DefCon there was one about using quantum physics for random key generation and another about using GPUs for massively parallel password cracking. Schuyler Towne’s lock talks are always a joy, and this talk fits nicely into that category. Though “pure indulgence” is not entirely correct: while there will never be a one-domino causality chain from any of these talks to security assessment code I might write for NTO, the stimulation of thought does seep into the product, and subjects oblique to a particular piece of software, like physics and numerical analysis, do have a way of popping up in the algorithms I write.

What are side-channel attacks?


So first things first… I expect at least some of you, like me, had to look up “side-channel attacks.” There have been side channel attacks in the news recently, like the one last year where, as published in ThreatPost, a side channel attack was used to steal a cryptographic key from co-located virtual machines. Wikipedia defines a side channel attack as “any attack based on information gained from the physical implementation of a cryptosystem, rather than brute force or theoretical weaknesses in the algorithms (compare cryptanalysis).” Side channel attacks have to do with measuring fluctuations in hardware and then intuiting the behavior of an algorithm running on that hardware; in other words, monitoring something related to the information you are pursuing and then analyzing the monitored data to tease out the desired information.

Obtaining an RSA key by monitoring power usage – Passive methods

The first example the speaker addressed was ascertaining an RSA key by monitoring the power usage of the CPU executing the algorithm. The RSA encryption algorithm bottom-lines to a sequence of squares and multiplies, but the multiplies are executed only for 1-bits in the key. So what you see in the power graph is a sequence of spikes whose time differentials reveal whether or not a multiply was executed in that iteration, and from this one can piece together the key. The countermeasure is to do a dummy multiply when the key bit is zero, so that each iteration does a square and a multiply. This of course increases the execution time of the algorithm, and it is also not a sure thing; the dummy multiply is still slightly different from the actual multiply, though you do have to try harder to get the data. With this and the other approaches the speaker discussed, a common denominator is that if you have a lot of time with the device in question, you can simply do massive numbers of iterations and overwhelm subtleties with statistics.
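To make the square-and-multiply leak concrete, here is a minimal Python sketch (my own illustration, not code from the talk) of the leaky loop and the dummy-multiply countermeasure:

```python
# Why square-and-multiply leaks: the multiply runs only on 1-bits of the
# exponent, so each key bit shows up in the power/timing profile.

def modexp_leaky(base, exponent, modulus):
    result = 1
    for bit in bin(exponent)[2:]:                # walk the key MSB-first
        result = (result * result) % modulus     # square every iteration
        if bit == "1":
            result = (result * base) % modulus   # multiply ONLY on 1-bits -> spike
    return result

def modexp_dummy(base, exponent, modulus):
    # Countermeasure from the talk: always multiply, discard the dummy result.
    result = 1
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus
        multiplied = (result * base) % modulus         # multiply every iteration
        result = multiplied if bit == "1" else result  # keep it only on 1-bits
    return result

assert modexp_leaky(7, 11, 1000) == modexp_dummy(7, 11, 1000) == pow(7, 11, 1000)
```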

Clarifying Statistics and Algorithms

Interesting related side note: I knew a guy at a previous job who did astronomical photography involving multiple all-night exposures of the subject being photographed (a galaxy in his case). It turns out that the more pictures you take of the same subject and then combine later, the more perturbations like atmospheric distortion are averaged out, and the clearer the image becomes. Statistics in general works like this: the persistent factors become ever more emergent and pronounced, and the error ever smaller, the more samples you take. Sometimes an algorithm such as ECDSA may power-spike in such a way that you do not directly get the variable you are after, but you get one of the variables in the formula, and so with a bit of algebra and several iterations you can get what you are after. Such things as the algorithm using 24-bit numbers and dealing with them 8 bits at a time can also be used to analyze the power profile of the algorithm. Interestingly, the speaker said that even if the algorithm uses 16-bit numbers, an 8-bit approach gets you correlations that are not as good but still usable.
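A quick numerical sketch of that averaging effect (my own illustration, not from the talk): the persistent signal survives stacking while independent noise shrinks roughly as 1/sqrt(N):

```python
# Stacking repeated acquisitions: the signal persists while independent noise
# cancels roughly as 1/sqrt(N) -- the same trick as combining astro photos.
import numpy as np

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 2 * np.pi, 1000))  # stand-in power profile

for n in (1, 100, 10000):
    traces = signal + rng.normal(0.0, 2.0, size=(n, 1000))  # n noisy captures
    residual = traces.mean(axis=0) - signal                 # error left after stacking
    print(f"N = {n:5d}  residual RMS = {residual.std():.3f}")
# Prints roughly 2.0, 0.2, 0.02: every 100x more samples buys 10x less noise.
```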

Side channel attacks – Active methods

That fairly accounts for the passive methods he discussed. He then went on to discuss active methods, which include glitching the supply voltage, glitching the clock, and glitching the chip itself using powerful optical spikes. A well-placed supply glitch introduces errors in the execution of the algorithm that can yield information about the data it was dealing with when it errored. Clock glitches can cause the algorithm to skip instructions such as branches, which can also produce useful data in the power signature. Optical glitches target specific parts of the chip with electromagnetic interference (light is an EM wave), which, again, can yield information via how they affect the running of the algorithm. Countermeasures to these techniques include inserting random waits before comparisons and doing multiple comparisons while requiring the results to be the same (being wary of compiler optimizations, i.e., turning them off for this code).
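Here is a rough sketch of what those countermeasures look like in code (my own toy example, not from the talk; in C you would also need to keep the optimizer from folding the two comparisons into one):

```python
# Toy sketch of the countermeasures above: desynchronize the attacker with a
# random wait, then do the sensitive comparison twice and require agreement,
# so a single well-timed glitch cannot flip the check.
import secrets
import time

def glitch_hardened_check(candidate: bytes, expected: bytes) -> bool:
    time.sleep(secrets.randbelow(1000) / 1_000_000)        # random 0-1 ms delay
    first = secrets.compare_digest(candidate, expected)    # comparison #1
    second = secrets.compare_digest(candidate, expected)   # comparison #2
    return first and second  # one glitched comparison is not enough
```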

As you would expect, these too can be circumvented, but they make the attacker’s job harder. The data one gets from glitched execution of a crypto algorithm can in some cases be analyzed by lattice methods. As the speaker said, he didn’t have time to fully elucidate this, but in summary, one calculates a lattice and then calculates the closest vector within that lattice (this is admittedly a glossover paraphrase of an admitted glossover to begin with), and it can be used to reconstruct crypto keys from the glitched and power-signatured algorithm.

This talk was most enjoyable to someone like me. In security, it is always valuable to be made to think about unexpected ways to acquire information, since of course the more clever attackers are doing exactly that. We have all noticed how computers have become orders of magnitude faster and more efficient. What once took hundreds of dollars’ worth of Cray time and about as much electrical power can now be done on a $300 computer for “too cheap to meter” electrical power. If you have ever designed anything around a 6502 chip, you know those old chips consume roughly the same power constantly, regardless of what they are doing. This is not to say the methods elucidated in this talk would not work on a 6502, but modern chips that throttle themselves according to what they are doing greatly help these methods along compared to the old chips. The biggest software threat to security in the Apple II days was getting a virus. On a computer that was not connected to the internet or any other communications net, not running services that listen for commands to execute, and barely fast or capacious enough to run the one program it was running, one didn’t worry about security much. But as we obsess over CSRF, XSS, SSL, SQLI, etc., we must remember that hardware has evolved along with software, and therefore hardware vulnerability has also evolved along with software vulnerability.


Tales from the Web Scanning Front: Why is This Scan Taking So Long?

As CEO, I’m constantly emphasizing the importance of customer support and trying to attend several support calls each week to stay on top of our support quality and what customers are asking.

Surprisingly, application scan times are one of the most common issues raised by customers.  Occasionally, scans will take days or even weeks.

At this point, I would say that in almost all cases, there is an issue that lies within the application’s environment as opposed to something within the software.

First some background on web application security scanners. Web scanners first crawl websites, enumerate attack points and then create custom attacks based on the site.  So, for example, if I have a small site with 200 attackable inputs and each one can be attacked 200 ways, with each attack requiring 2 requests, I have 200*200*2 or 80,000 requests to assess that site.

Now NTOSpider can be configured to use up to 64 simultaneous requests, so depending on the response time from the server, you can run through requests very quickly. Assuming, for example, 10 requests a second, that’s 600 per minute, 36,000 per hour, and you can get through that site in 2.22 hours.

The problem is that quite often the target site is not able to handle 10 or even 1 request per second.  Some reasons can include:

  • Still in development – The site is in development and has limited processing power and/or memory.
  • Suboptimal code – The site is not built to handle a high level of traffic, and this has not yet shown up in QA. We were on the phone with a customer last month who allowed us to look at the server logs, and we saw that one process involved in one of our requests was chewing up 100% of the CPU for 5 seconds. Another application was re-adding every item to the database each time the shopping cart was updated (as opposed to just the changes), and our 5,000-item cart was severely stressing the database.
  • Middleware – Not to bash any particular vendor (ColdFusion), but some middleware is quite slow.

So let’s look at our 80,000 request example from above and assume that our site can only handle 1 request per second. Our 2.2 hour scan time balloons to 22 hours. For the 5-second response in bullet 2, we get to 4.6 days for our little site. The good news is that NTOSpider can be configured to slow itself down so as not to DoS the site (this is our Auto-Throttle feature). The bad news is that it will take some time.
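The arithmetic is simple enough to put in a few lines; here is a rough estimator using the same assumptions as the example above:

```python
# Rough scan-time arithmetic matching the examples in this post.
def scan_hours(inputs, attacks_per_input, requests_per_attack, requests_per_sec):
    total_requests = inputs * attacks_per_input * requests_per_attack
    return total_requests / (requests_per_sec * 3600.0)

print(scan_hours(200, 200, 2, 10))   # 80,000 requests at 10 req/s -> ~2.2 hours
print(scan_hours(200, 200, 2, 1))    # same site at 1 req/s -> ~22 hours
print(scan_hours(200, 200, 2, 0.2))  # 5-second responses -> ~111 hours, ~4.6 days
```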

So what’s a poor tester to do?

  • Beefier hardware – If you are budgeting for a web scanner, consider spending an extra couple of thousand dollars on some decent hardware to test your apps. (Note: a modern laptop with optimal RAM for the OS you are running – 4 GB for a 32-bit OS, 8 GB for a 64-bit OS – will solve 90% of all performance issues.)
  • Scheduling – In some cases, you can schedule scans so that even if they run longer, you can still get things done in time.
  • Segmenting – If you know that only a portion of the site has changed, you can target the scan to test only that subset and dramatically reduce scan time.
  • Code remediation – Not to put too fine a point on it, but if a single request takes 5 seconds to process, a hacker can DoS your site by hand. You might want the developers to look at adjusting the code.


Surviving the Week – 12/09/2011

Sorry I missed last week; this one will cover the last two weeks.


Announcing SQL Invader

Today, we announced SQL Invader, a new free GUI-based tool that enables testers to quickly and easily exploit a SQL Injection vulnerability, get a proof of concept with database visibility, and export the results to a CSV file. In just a few clicks, users can view the list of records, tables and user accounts on the back-end database.

Tools like this are still critical for comprehensive application security testing and can help organizations remain a step ahead of the bad guys. SQL Injection has been the dominant method used in this year’s high-profile web application attacks, with millions of sites attacked in 2011.

We created this tool because our customers and the community at large have expressed a need, and we always want to contribute to the community as much as we can. Although SQL Injection is well documented and there are tools to discover the vulnerabilities, it has been very difficult to determine whether a vulnerability can actually be exploited, because most existing SQL Injection testing tools are executed from a command line, lack an intuitive user interface, or are no longer supported. Without the ability to clearly demonstrate the exploitability of a vulnerability, remediation efforts are often delayed, and friction surfaces between security and development teams. We designed NTO SQL Invader so that penetration testers and developers can quickly and easily leverage a vulnerability to view the list of records, tables and user accounts on the back-end database.

SQL Invader works as a standalone solution or with NTOSpider and enables you to:

  • Paste the injectable request straight from an application scan report
  • Control how much information is harvested
  • View data in an organized manner using tree controls and data grids
  • Leverage the logged data in a CSV file
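For the curious, here is a toy sketch of the kind of UNION-based harvesting a tool like this automates. It is not SQL Invader’s implementation, and the endpoint, parameter, and column choices are invented:

```python
# Toy UNION-based harvesting sketch (hypothetical endpoint and parameter).
# A real tool also fingerprints the DBMS and matches the UNION column count.
import urllib.parse
import urllib.request

TARGET = "http://test.example/item"  # hypothetical injectable endpoint

def union_extract(select_expr: str) -> str:
    payload = f"0 UNION SELECT {select_expr} -- "
    url = TARGET + "?" + urllib.parse.urlencode({"id": payload})
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()  # a real tool parses the value out of the HTML

# e.g., enumerate table names on a MySQL back end:
# print(union_extract("table_name FROM information_schema.tables"))
```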


“Perfect-Fit” Virtual Patching for WAF/IPS with NTODefend

Recently NT OBJECTives announced NTODefend and its ability to generate “perfect-fit” custom patches for WAF and IPS. This marketing term “perfect-fit” has been the cause of some questions. People are wondering how our “perfect-fit” rules differ from what other DAST vendors are doing, as well as from solutions like ThreadFix (aka Vulnerability Manager) from Denim Group. Those who know me know that I don’t like it when vendors overstate their capabilities, and I make sure NTO does not do this either, so I think this term deserves some explanation.

The other solutions that are able to generate virtual patches work from pre-defined templates based on categories of attacks, such as SQL Injection, Cross-Site Scripting, and OS Injection. So if a given input is vulnerable to SQL Injection, the SQL Injection template will be used to generate a virtual patch for that input.

NT Objectives’ approach differs in that NTODefend is able to generate rules based on deeper intelligence about the input. This extra information comes from two key features in NTOSpider:

  1. NTOSpider‘s input population technology works to determine the intended legitimate data. For example, the input population technology will determine if the input only accepts numbers, or is intended for a phone number, email address, street address, etc.
  2. NTOSpider’s attacking engines detail specifics about the attacks that worked, with information such as usable characters and escape sequences.

By leveraging details about the attacks, NTODefend can generate more specific and aggressive rules to function as countermeasures to the attacks the input was vulnerable to. This can include making rules that only allow numerical values, or perhaps blocking single quotes but not double quotes, or allowing parentheses but not dashes. NTODefend can also decide which canned filters to include to make sure the input is well protected.
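As a rough analogy (my own toy sketch, not NTODefend’s rule engine or its output format), a “perfect-fit” rule combines the input’s intended data type with the characters the attack engine proved exploitable:

```python
# Toy sketch of a "perfect-fit" virtual patch: combine the input's intended
# data type (from input population) with the characters the attack engine
# proved exploitable into one per-input allow rule.
import re

SHAPES = {
    "number": r"\d+",
    "phone":  r"[0-9()\- ]{7,20}",
    "email":  r"[^@\s]+@[^@\s]+\.[^@\s]+",
}

def perfect_fit_rule(intended_type: str, exploitable_chars: str) -> re.Pattern:
    banned = re.escape(exploitable_chars)
    # Reject any value containing a proven-exploitable character, then require
    # the rest to match the input's intended legitimate shape.
    return re.compile(rf"(?!.*[{banned}]){SHAPES[intended_type]}")

rule = perfect_fit_rule("number", "'")       # numeric id that fell to quote-based SQLi
print(bool(rule.fullmatch("12345")))         # True  -> legitimate traffic passes
print(bool(rule.fullmatch("1' OR '1'='1")))  # False -> the proven attack is blocked
```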

The key point is that each rule is generated custom to the input AND custom to the ways it can be exploited.

After the virtual patches are installed into the WAF or IPS, NTODefend provides the ability to re-test all the inputs with both attack traffic and good traffic (a modifiable database of good traffic is included for each data type NTOSpider can detect). It then generates a report showing which of the good and bad requests got blocked. This lets users quickly understand how effective the virtual patches were and hopefully alerts them to any virtual patches that could be blocking good traffic.

We do not claim that these generated virtual patches will always be 100% accurate in all situations, but we are confident that they will be useful and that we provide solutions for users to quickly deal with discovered vulnerabilities.

I welcome discussion and questions on this topic.

Introducing Jim Broome

We caught a big one!
I’m proud to announce that my buddy Jim Broome has joined the NT OBJECTives team and will be contributing to the blog and podcast.

Jim Broome, CISSP
Jim, an information security veteran with two decades of experience, is joining as VP of Security Services. Jim’s role is to provide world-class SaaS-based web security services through NTOSpider On-Demand while also providing leadership to the NTOLabs research and consulting teams.

Experience
Practice Manager – Accuvant LABS – Accuvant, Inc.
As one of Accuvant’s most seasoned security assessors, Mr. Broome performed innumerable consultative engagements, including enterprise security strategy planning, risk assessments, threat analysis, application assessments, network assessments, penetration testing, and wireless security assessments for a large number of Fortune 500 clients. These clients came from a variety of markets, including manufacturing, telecommunications (cellular and traditional), public utilities, healthcare, financial services, and state governments.

Principal Security Consultant – ISS X-Force

Prior to joining Accuvant, Jim was a principal security consultant for Internet Security Systems (ISS) and a member of the X-Force penetration testing team. At ISS, he was responsible for providing technical leadership to the Western region consulting practice while performing his day-to-day duties of network assessments and penetration testing.

Director of Network and Security Operations – Cavion.com

Before X-Force, he was the director of network operations for Cavion.com, a managed service provider exclusively for credit unions. At Cavion.com, Jim was responsible for managing the network operations staff and security organization while maintaining 99.999% uptime.

HouSecCon 2011 and B-Sides ATL Review

Last week was a travel week.
On Wednesday I was in Austin for some meetings, then headed to Houston for the second annual HouSecCon on Thursday. I have to say that I was blown away by how much bigger and better it was than last year (with the exception of the badges ;). My buddy Michael Farnum puts this thing on with a team of friends, and they are doing an amazing job growing the event. It was fun having a booth for NT OBJECTives, and everyone loved the new shirts we were giving out.

This year MJ Keith (now with The Denim Group) was the keynote speaker. I was first introduced to MJ Keith at last year’s HouSecCon, where he blew me away with his Bump hack in his “Pwn on the go!” talk, and I was glad to see him given the headlining spot this year.

The talks were all great, with highlights from Michael Gough, Josh Sokol and Zac Hinkel. I did my “Not your granddad’s webapp” talk, which seemed to go over well. If you missed it, you can watch the video.

On Friday I was in Atlanta for B-Sides Atlanta, which was a fun event. I didn’t have as much time to sit in on the talks, but the lockpick room was great, and I tried to hang out in the podcasters’ room, even though it was a little hard to engage in useful conversation. I wonder what it was like for those listening to the live stream. I didn’t do a talk at this one, so I just spent my time meeting people and eating great southern food.

Comparing the two would be hard because they were entirely different, so I will just say that I had a fun week at both cons and look forward to both next year.


Hacking like it’s 1999

Time for a little trek down memory lane, and a move to start striking out on the next trail!

Back in the late ’90s I was just getting started in my life as a “hacker,” and I quickly became amazed at the work L0pht was putting out, such as netcat and L0phtCrack. I remember reading about their appearance before the US Congress when it happened, and seeing a small clip of it on MTV’s True Life “I’m a Hacker” later that year.


Over the years I have had an amazing journey, launching a security group at Fortis US and then joining Foundstone around 2000. I got to be part of an AMAZING group over there. At the time it was a collection of the most insane talent, and to this day some of the smartest people I run into hail from that period at Foundstone.

I have had the privilege of getting to know and becoming friends with a few of the guys from those videos, and I continue to enjoy meeting these guys and learning from them, and hopefully teaching them a few things ;)

These days, while running NTO, I’m having the fun of finding and hiring some of the guys I think will be the next generation of trailblazers, and building tools to help with today’s and tomorrow’s hurdles. It’s a blast, and I have been thinking that the products that aid the progress of security are just starting to hit their stride; the next few years will usher in another boom for our industry.

However, over the last few months I have been doing most of the primary research into this whole Cenzic patent mess: putting together piles of prior art, digging around the community for who was doing what and when, and remembering and researching the challenges of the day. I watch videos like the L0pht at the US Congress and the MTV True Life episode, and read papers and posts from the late ’90s… and I just continue to realize how little progress we have all made. A decade has brought us so much, and yet so little, progress.

There is nothing to do but continue the fight, and continue trying to think about this differently. Maybe the next decade will prove better than the last. To that end, I am re-starting my podcast. I had abandoned my post on this site for too long. We all need to be trying to educate the next generation as much as possible, not just to show them the current state of affairs, but to challenge them and instill in them the fun of it all.

So I’m dusting off the Mightyseek sound files and mic and gonna get to it.

Talk to you all soon!