SSL++: Tales of Transport Layer Security at Twitter

I am happy to have attended this talk at 2013 B-Sides San Francisco by @jimio, a Twitter employee, on SSL security and how to build a fully SSL site. The title was “SSL++: Tales of Transport Layer Security at Twitter,” and it was definitely a good way to wake up and start the day. Twitter switched to exclusive SSL and ended up with a faster site. In this talk, he discussed why and how.

First point: I am indebted to the speaker for prompting me to do a bit of reading about the CRIME and BEAST SSL/TLS attacks. I am primarily a software architect, but at each job on my resumé I have picked up very interesting domain knowledge, and crypto is full of things like CRIME and BEAST that do not occur to you as you use or design a crypto algorithm. To summarize for the benefit of those who need it (and to presage some of the similar inject-then-diagnose approaches to acquiring crypto keys that I will be writing about with regard to other talks I attended): the CRIME attack works by injecting content into TLS-compressed headers (or indeed any information that is compressed and then encrypted) and observing the resulting compressed size, relying on the fact that the compression algorithm economizes on repeats. If your injected content causes the size to increase, it is probably not in the original content; if the size does not increase (or increases very little), it probably is. So one can guess and home in on the compressed content without ever knowing the crypto key. BEAST works by injecting content that is 15 bytes, then 14, then 13, and so on down to zero, so that at each iteration the last byte of the content is the only unknown byte, and one only has to brute-force 256 combinations rather than 2^128. This reminds me of Schuyler Towne’s talk about how to get into those base-10 suitcase locks. Typically a session cookie is being pursued with this attack.
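The CRIME side channel is easy to see in miniature. Here is a toy sketch of the compression oracle using Python’s zlib; the secret, request shape, and guesses are all made up for the demo (real CRIME targeted TLS-compressed request headers carrying the session cookie), but the principle is the same: the guess that repeats bytes already in the stream compresses better.

```python
import zlib

# Hypothetical secret the attacker wants to recover.
SECRET = b"sessionid=7f3a9c1b"

def oracle(injected: bytes) -> int:
    # Attacker-controlled bytes and the secret share one compression
    # context; the attacker observes only the compressed length.
    request = (b"GET /?q=" + injected + b" HTTP/1.1\r\n"
               b"Cookie: " + SECRET + b"\r\n")
    return len(zlib.compress(request, 9))

# A guess that repeats the secret compresses better, because DEFLATE
# replaces the repeat with a short back-reference instead of literals.
right = oracle(b"Cookie: " + SECRET)            # exact repeat of the secret
wrong = oracle(b"Cookie: sessionid=XQJZWVYK")   # same length, no repeat
print(right, wrong)
assert right < wrong  # the correct guess yields the shorter ciphertext
```

A real attack iterates this oracle byte by byte to recover the cookie; this sketch only demonstrates the length difference that makes the iteration possible.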

Transport Layer Security at Twitter

Okay, there’s the preamble. The balance of this talk was not so much about exotic SSL vulnerabilities like those discussed above, but about vulnerabilities stemming from not thoroughly using SSL. Sometimes this means the login page is in SSL (lovely, protects the password) but the cookie travels in cleartext (bollocks). So it needs to be SSL everywhere. Twitter instituted such a change at one point, gave customers the ability to opt out, and about 1% did. However, even when you think you are fully SSL, there are still CSRF-ish things people can do, like <img src="http://twitter.com">, which can prompt GETs over HTTP and thereby reveal the user’s cookie even if the response is innocuous. The speaker discussed man-in-the-middle attacks, though not the variety you, the reader, are likely to have been hearing about lately, but the simpler one: intercept the SSL, broker it as HTTP to the server, and thereby read all the content unencrypted. Again, the countermeasure here is absolutely airtight SSL on the site. And then there are things like #!/dir, or anything similar, where everything past the # does not get sent to the server and is instead processed with client-side script. That one actually transcends the thesis of this talk: certainly it is an SSL issue, but it is a whole-bunch-of-other-things issue as well.

Prior to working in information security, I worked at a company where we were doing loads of this kind of stuff in a web application, and also calculating cookies in client-side JSP (!)… 13 years ago… more naive times. Management hired a security firm to audit, and that is how we found out about this stuff. We weren’t developing an e-commerce site; it was more of an internal-use site, but of course one wants to be secure even in that environment.
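One standard defense against that injected <img src="http://…"> trick, beyond serving everything over SSL, is to mark the session cookie so the browser refuses to attach it to any plaintext request at all. A minimal sketch with Python’s standard http.cookies module (the cookie name and value are made up):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "s3cr3t"
cookie["session"]["secure"] = True      # never sent over plain HTTP
cookie["session"]["httponly"] = True    # invisible to page scripts
header = cookie.output()
print(header)
```

With the Secure flag set, even a forced GET to http://twitter.com leaks nothing, because the browser simply omits the cookie from the cleartext request.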

Every request should be SSL

The overall goal is to get all requests to your site, internal and external, to be SSL. Obviously you can control the former but not fully the latter, so on the latter you do the best you can. For example, always emit <link rel="canonical"> with an https URL. Google’s crawlers respect this, but Bing’s and Yahoo’s don’t. There is apparently some partisanship holding that it is unseemly to use rel="canonical" in this fashion (it is not canonical to use canonical this way? :-)), but as you can imagine, the speaker rejects such arbitrary religious arguments, as do I. Then there is the issue of people not typing fully qualified links with protocol into their browsers (it’s been a while since 1992, after all). Of course you expect any browser to GET http://www.twitter.com, but interestingly, Twitter apparently convinced Chrome developers to put an “if (it is twitter) { assume HTTPS }” line in their code. More measures to encourage clients to request nothing but SSL include the Strict-Transport-Security response header (HSTS) and Content-Security-Policy (CSP).
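For concreteness, here is what those two response headers might look like; the max-age and policy values are illustrative choices of mine, not anything the talk prescribed:

```python
# Sketch of HTTPS-enforcing response headers (values are illustrative).
headers = {
    # HSTS: a browser that has seen this header rewrites http:// URLs
    # for this host to https:// for the next year, including on
    # subdomains, before any packet ever leaves the machine.
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    # CSP: refuse to load any subresource over plain HTTP.
    "Content-Security-Policy": "default-src https:",
}
for name, value in headers.items():
    print(f"{name}: {value}")
```

Note that HSTS only helps after the first HTTPS visit, which is exactly why Twitter also wanted the browser-side default described above.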

Pros & Cons of Cert-Pinning

At this point he spoke about cert-pinning, which I wrote up extensively with regard to another talk, so suffice it to say it is a good idea wherever feasible. Mobile apps were the focus of that other talk, and the disadvantage to cert-pinning there was having to redeploy all in-the-field apps with the new baked-in cert whenever the cert needs to be changed. These would be things like standalone games that communicate with a server. So if you are building a web application that is exclusively used as such, and is therefore inherently self-deploying, that concern is lessened, though I suppose it requires savvy users/browsers to maintain client-side trusted certs and not capriciously OK new ones.
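The pinning check itself is small. A minimal sketch, assuming a pin on the SHA-256 fingerprint of the server certificate’s DER bytes (the “certificate” here is fake bytes, just to exercise the logic; a real client would pull the DER cert off the TLS connection):

```python
import hashlib

def cert_matches_pin(der_cert: bytes, pinned_sha256_hex: str) -> bool:
    # Compare the fingerprint of the certificate the server presented
    # against the fingerprint baked into the client at build time.
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

shipped_cert = b"-- pretend DER-encoded certificate --"
PINNED = hashlib.sha256(shipped_cert).hexdigest()  # baked in at build time

ok = cert_matches_pin(shipped_cert, PINNED)   # the expected cert: accept
mitm = cert_matches_pin(b"mitm cert", PINNED) # anything else: reject
print(ok, mitm)
```

The redeployment pain the talk mentioned falls out of this directly: rotate the server cert and every shipped copy of PINNED goes stale.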

Performance Issues with Exclusive SSL? Not Really

The speaker concluded by addressing the performance considerations of going exclusively encrypted. In short: optimize other areas of your website to buy back the performance lost by going SSL, which is not that significant to begin with. The advantages far outweigh the performance liabilities. Further, his company (Twitter) is a case in point: they cleaned up their code as part of the switch to exclusive SSL and netted out to a faster site with SSL.

I’m finding that a common denominator in a lot of these talks is “the more things change, the more they stay the same,” and possibly “there is one (web developer) born every minute.” The exotic, sexy (in the nerd sense) vulnerabilities command our attention as we try to stay ahead of the bleeding edge, but the old vulnerabilities (particularly as they combine with new ones) keep resurfacing, and constant vigilance means remembering them as much as it does staying abreast of new developments. Our CEO, Dan Kuykendall, likes to refer to it as Where’s Waldo or Leisure Suit Larry: the same old things just keep popping up in new places.

Tales from the web scanning front: Don’t eat the entire buffet at once

One of the more common problems that we see is customers trying to bite off more of their application infrastructure at once than they can chew.  A certain amount of planning will yield better, more digestible results with substantially less indigestion.

Dropping all of acme.com into your web scanner when there are 100 applications with 50,000 pages across 60 subdomains is likely not an optimal strategy.  Here are some considerations:

  • Scan time:  Assuming reasonable connectivity and application server horsepower, a scan of a medium-sized application can take 3-12 hours.  Scanning 60 applications at once can take a week or more before the scan completes and you can start working on the results.
  • Information Segmentation:  Most enterprises will have more than one development team.  It’s not the best policy to ship detailed information about all of your vulnerabilities to people who don’t need to know it.  Also, it’s much easier to have one report per application that you can just send to the team coding it so that they can fix just the vulnerabilities listed in the report.
  • Report Size:  A scan that large will create a report that will be immense if you have any significant number of findings.  Even if your vendor segments and paginates the report, it is going to be harder to navigate than a series of smaller reports.
  • Re-Scanning: Once the developers start remediating vulnerabilities, you will be asked to re-scan to give a clean bill of health for each application.  You don’t want to have to wait the week or more an enterprise scan takes to update the development team.
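The arithmetic behind “a week or more” is worth making explicit, using the post’s own figures (3-12 hours per medium application, 60 applications; the 6-way parallelism below is my own illustrative assumption):

```python
# Back-of-the-envelope scan scheduling from the post's figures.
apps = 60
hours_low, hours_high = 3, 12   # per medium-sized application

serial_low, serial_high = apps * hours_low, apps * hours_high
print(f"one monolithic scan: {serial_low}-{serial_high} hours "
      f"({serial_low / 24:.1f}-{serial_high / 24:.1f} days)")

# Splitting into per-application scans lets each team start on its own
# report as soon as its scan finishes, and keeps re-scans cheap:
# verifying one app's fixes costs 3-12 hours, not another full week.
parallel = 6  # hypothetical: six scans running concurrently
print(f"{parallel}-way parallel: {serial_low / parallel:.0f}-"
      f"{serial_high / parallel:.0f} hours")
print(f"re-scan of one app: {hours_low}-{hours_high} hours")
```

Even at the optimistic end, the monolithic scan ties up the results for a week; broken up, the first reports land the same day.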

The one downside to all of this is that you will have to kick off and monitor more scans.  If you have a large number of applications and this is likely to be a logistical headache, you should consider an enterprise portal to schedule and monitor scans and deliver scan results (full disclosure, we offer such a tool).

As in most endeavors, a bit of planning goes a long way in making life easier.  Giving some thought to breaking up your application scanning will make your application scanning program a lot easier and more effective.

NT OBJECTives Positioned in the “Visionaries” Quadrant of the Magic Quadrant for Dynamic Application Security Testing (DAST)

Recent Gartner research positioned NT OBJECTives in the Visionaries quadrant for Dynamic Application Security Testing (DAST).(i) Gartner’s report was published in December and is now available to all Gartner subscribers.

Analysts Neil MacDonald and Joseph Feiman state in the report that “Dynamic Application Security Testing (DAST) solutions should be considered mandatory to test all Web-enabled enterprise applications, as well as packaged and cloud-based application providers.” They go on to note that “the market is maturing, with a large number of established providers of products and services.”(ii)

We consider our positioning in the “Visionaries” quadrant by Gartner confirmation of our mission and ability to deliver technologies and services that solve today’s toughest application security software challenges. Web application security represents one of the greatest security challenges facing the information technology industry today. We will continue to innovate and deliver the products today’s security teams need. In the months ahead, we are excited to launch a number of products that will further enhance our market position and help our customers.

In the report, MacDonald and Feiman also note that “as organizations have improved the security of their network, desktop and server infrastructures, there has been a shift to application-level attacks as a way to gain access to the sensitive and valuable information they handle, or to use a breach of an application to gain access to the system underneath. In addition, there has been a shift in attacker focus from mass “noisy” attacks to financially motivated, targeted attacks. As a result of these trends, application security has become a top investment area for information security organizations, whether improving the security of applications developed in-house, procured from third parties or consumed as a service from cloud providers.”(iii)
Gartner clients may view a copy of the Magic Quadrant for Dynamic Application Security Testing (DAST) report via Neil MacDonald’s blog, “The Market for Dynamic Application Security Testing is Anything but Static”.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About NT OBJECTives
NT OBJECTives, Inc. brings together an innovative collection of experts in information security to provide a comprehensive suite of technologies and services to solve today’s toughest application security challenges. NT OBJECTives solutions are well known as the most comprehensive and accurate Web application security solutions available. NT OBJECTives is privately held with headquarters in Irvine, CA.

(i) Gartner, “Magic Quadrant for Dynamic Application Security Testing,” Neil MacDonald and Joseph Feiman, December 27, 2011
(ii) Gartner, “Magic Quadrant for Dynamic Application Security Testing,” Neil MacDonald and Joseph Feiman, December 27, 2011
(iii) Gartner, “Magic Quadrant for Dynamic Application Security Testing,” Neil MacDonald and Joseph Feiman, December 27, 2011