Category Archives: 2013 B-Sides San Francisco

Christopher R Lew China Politics & Cyber Espionage

Chinese Corporate Cyber Espionage by Christopher R. Lew, Ph.D.

I attend some of the talks at security conferences for technical interest, others for political interest. This one, at 2013 B-Sides San Francisco, was the latter, and Mr. Christopher R. Lew, author of several Chinese history books, did not disappoint; it was immensely interesting. That morning I had been watching a news report in the hotel on Chinese espionage, with various pundits debating the issue and one military official in particular underscoring the seriousness of the threat and how we as a nation need to get off our collective butt and respond to it. So that was great preparation for this talk.

B-Sides San Francisco

Historian Christopher R. Lew


I went into the talk prepped with the U.S. side of the issue and then the speaker gave the Chinese side. Mr. Lew is an academic Chinese historian with security knowledge. His education has given him a cultivated sense of where China is coming from historically and how that shapes that culture’s plan for its future survival. He started off with a boilerplate disclaimer about his opinions being his own and not intended to necessarily reflect those of his company or the United States so perhaps I should do the same.

The opinions of this talk represented here are solely those of M. J. Power and do not necessarily reflect the editorial views of NT OBJECTives or its affiliates, the United States, China, Earth, or the known or yet-to-be-discovered Universe. I daresay God might agree with me, but only if the atheists are correct concerning His existence.

Chinese Political System

The speaker started off talking about most people’s impression of the Chinese political system and how that impression is incorrect. Most people, according to him, think that the government has a lot of control but that there are more or less autonomous business and other entities that, much like in this country, have to render unto Caesar but are otherwise, at least somewhat, self-determining. This is incorrect. The Chinese Communist Party (CCP) (not to be confused with the CCCP – the Cyrillic abbreviation for the Soviet Union :->) controls everything.

Russian and Chinese Politics and Imports

Side note: More than idle Nerd humour in the parens there; it would be an interesting inquiry for any poli-sci student to compare and contrast the Russian and Chinese approaches to communism in the 20th century and the Russian and Chinese approaches to capitalism in the 21st. I can offer some firsthand knowledge. Our company employs Russian émigrés and some actual citizens of Russia. But we do not employ any Chinese nationals (that we know of).

On the other hand, my home, like yours, is filled with stuff that was Made in China. I even have a couple of items that were Hecho en China (thought I might have woken up in TJ with no memory of how I got there when I first saw that… both kidneys intact though). My earliest memory of noticing and being kind of surprised that something I bought was Made in China was in 2000, when I bought a 750MHz Win98 computer which I still have (Спасибо китайскому народу за великие информационные технологии). If you want to buy something Made in Russia, however, you have to do a bit of digging. Though I did see some cool stuff at the hobby store that was.

So back to the talk… the realities of the Chinese political system make it highly unlikely that corporate/military IT attacks by enterprising independent hackers for personal gain are going on. Such attacks are in fact being ordered by the CCP. Further, continuing the above point, the People’s Liberation Army (PLA) of the People’s Republic of China and all the corporations are directly controlled by the CCP. The corporations present a conventional corporate Board of Directors sort of face when dealing with the rest of the world, but that is a façade; the companies are indeed motivated by profit, but their primary purpose is to serve and be under direct control of the government. The CCP is so ingrained in Chinese culture that one might as well say that they speak English when dealing with English-speaking clients but we suspect that behind the scenes they might be Chinese.

So, the Army, the corporations, everything, is part of the Chinese Communist Party.  Therefore any cyber-espionage would have to be tolerated by said government. “Tolerated” being the conciliatory way of saying instigated by it. As far as the citizens are concerned, the state filters what you see and do. Sort of like блат in Soviet Russia or, “it’s not what you know but who you know,” in this country, the Chinese citizens know the game (2 steps forward, 1 step back) and have ways of dealing within the system. For example, after a PLA employee does his/her prescribed work in the prescribed hours, if he/she greases the right palms, he/she can then use the state equipment (truck, computer, etc.) for personal projects in off hours.  This would seem to contradict what has been said so far but not really. These personal projects are not going to scale very large or get very far.

A thread that ran throughout the talk was that of ethical justification. China, and specifically the dictatorship government of China, is engaged in militarily and commercially oriented cyber-espionage and is rather brazen and unapologetic about it. Theirs is basically a “cost of doing business” argument. That is, espionage is simply something that great powers do. It also stinks of “boys will be boys” insofar as it is a macroscopic version of that microscopic copout. I recall reading that at Nuremberg, when Göring was first captured, he was rather jubilant and jovial towards his captors, basically taking for granted that as a head of state he would naturally be accorded certain courtesies and spared the culpability that it is necessary to impose upon the lower classes, based on the idea that the Nazi government was simply doing what all governments do.

As the proceedings continued and it became ever more evident that he would be held accountable, this changed and ultimately he bit a cyanide capsule and cheated justice. In that earlier time though, a journalist asked him how one prosecutes a state such as Nazi Germany and he said something to the effect of, “indeed, how are you going to get the farmer to put down his hoe and go off and fight possibly to the death… you do it with slogans, rousing anthems, pomp and circumstance.” The journalist betrayed his (or maybe it was her) own conceits by then saying, “sure, in a dictatorship but not in a democracy.” To which Göring sardonically replied, “same in any state, democracy, dictatorship, whatever.” So pardon my, I think, relevant all-states-are-basically-the-same diversion… back to the speaker’s thesis:  basically it is upon us (the United States) to let them know that cyber-espionage is unacceptable by fighting/preventing it.


So all the above is the what; what is the why? The Chinese government sees the future of the country as depending on double-digit economic growth, continued growth of the middle class, and maintaining a strong military. Ultimately they want the rest of the world to have to come to them for any industry, be it green energy, IT, biotech, whatever. No great surprise there either; that is what every nation wants. Their strategy for leapfrogging the rest of the world, and particularly the West, is indigenous innovation wherever possible, with espionage to fill in the cracks. Espionage of both a military and commercial nature. The speaker implied that ideology is giving way to materialism. This is an interesting point of view to someone like me, as I have come to regard ideology (any ideology) as nothing more than a wealth-hoarding strategy. That confirms the speaker’s position but from the other direction. I might say, “if you can’t feed them food and material goods, feed them bullshit,” to the speaker’s, “if you can’t bullshit them (anymore), give them food and material goods.”

Further, the speaker has noted that there is always a big picture to the Chinese espionage.

If we enlarge our view to encompass the forest, we will see that each individual tree (act of espionage) is part of a coordinated effort to increase the efficacy of Chinese industry and military might. One act of espionage can and often does facilitate another; supply-chain dynamics prescribe the attack strategy and coordination. An example is the recently unveiled Chengdu J-20 stealth fighter. It looks a good deal like an F-22 Raptor with canards. No coincidence… it was built in part from espionage of Lockheed Martin and has Russian engines. It is interesting to review the Cold War for some insights here.


America’s preoccupation with the USSR was primarily military in nature, and that was the sort of espionage about which the US principally worried. At Farnborough in 1989, an article in Flying magazine declared that “these latest examples of Soviet aeronautical engineering (An-225, MiG-29, Su-27) dispel the notion that Soviet military aircraft are simply Fred Flintstone copies of Western designs.” That sums it up… the biggest threat from the USSR was not when they were copying our stuff but when they were innovating. Further supporting that thesis, it is common knowledge that Stalin had spies in the Manhattan Project, but it seems not to be common knowledge that their espionage, while comprehensive, was of strategic value mainly in knowing of the existence of the American atom bomb and not so much how it was made.

The Soviet scientists did build their first bomb with the espionage knowledge because they didn’t want to risk getting Stalinated if their bomb design didn’t work.  But they had their own design which did, in fact, work. The China problem is more complex. The threat is military and industrial and the corporate espionage weakens the US a lot more than the Soviets building airplanes that look like B-29’s, Vickers VC-10’s, F-111’s, etc. The economic/cultural threat is much more profound and has the potential to resonate through much more of the future than the military threat. The Cold War had a life limit measured in decades because, though there was a lot of ideological posturing, the conflict was primarily conducted in a pugilistic manner.  Evaluating both China and the EU, the United States strategy for long term survival needs to be continued innovation and careful protection of its intellectual property.


How predominant is Cross-Site Request Forgery (CSRF)?


Continuing my series on the talks I attended at 2013 Security B-Sides, this one from Dan Hubbard (CTO, OpenDNS) and Frank Denis (@thinkumbrella), called “Building a Security Graph,” demonstrated some clever analysis and insights. The OpenDNS team leveraged the massive amount of free data coming to them from machines all over the internet issuing DNS requests to OpenDNS to analyze the security posture of the internet.

For the benefit of any non-Nerds who may have drifted in, DNS is the service on the internet that translates names (i.e. domain names) into the IP addresses that the computers want. In their own words, “At OpenDNS, terabytes of data flow in and out everyday.” They have applied creativity and solid data science skills to transform the data into security discoveries, predictive intelligence and tools.

They took the data, constructed various visualizations of it, and did statistical analysis in order to get a feel for the prevalence of vulnerabilities out there in the wild. The answer, not surprisingly, is that there is rather a lot of questionable activity going on. On their website, they note that about 0.1% of all queries are infected. When you visit OpenDNS’s website, you will see two meters on the bottom right-hand side of the home page, one for the number of requests they have received and another for the number of infected requests.

How predominant is Cross-Site Request Forgery (CSRF)?

As the data to which they have access is the name requests, that shapes the sort of analysis they can do with regard to security assessment.  Any attack that involves some other domain (i.e. attacker) will show up in the data as domain correlations. CSRF is an obvious example. Any attack where you have to see the guts of the request/response traffic in order to assess it as such will presumably not be amenable to their analysis.

They messed about with mathematical correlations for ascertaining such information as CSRF vulnerability and did topological/statistical analysis of the internet as it was presented to them by this huge body of DNS requests. CSRF (Cross Site Request Forgery) involves tricking the user/browser into issuing requests to another domain besides the one to which they think they have connected (this other domain being the attacker’s website). So by analysing the pattern of DNS requests, one can presumably see patterns of requests that strongly suggest CSRF going on, i.e. correlations of requests to one domain followed immediately by requests to another.  OpenDNS does not see the actual guts of the CSRF attack; they just see name requests that strongly imply its existence.
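To make the correlation idea concrete, here is a toy sketch (entirely my own illustration, not OpenDNS code) of the kind of analysis one could run over a DNS log: count how often a client’s lookup of one domain is followed, within a second or so, by a lookup of a different domain. Persistently high counts for a particular (site, third-party) pair are the sort of pattern that might hint at cross-site requests. The log format and time window are made up for the example.

```python
from collections import Counter

def cooccurring_domains(dns_log, window=1.0):
    """Count pairs of domains requested by the same client within
    `window` seconds of each other.  `dns_log` is a list of
    (timestamp, client_ip, domain) tuples, assumed sorted by time."""
    pairs = Counter()
    for i, (t1, ip1, d1) in enumerate(dns_log):
        for t2, ip2, d2 in dns_log[i + 1:]:
            if t2 - t1 > window:
                break          # log is time-sorted, so we can stop early
            if ip2 == ip1 and d2 != d1:
                pairs[(d1, d2)] += 1
    return pairs
```

Run over a large enough log, the pairs with anomalously high counts would be the “trees” worth inspecting more closely.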

Finding CSRF vulnerabilities & protecting CSRF sites

If you are looking for some information on how to find CSRF in your applications, there is a section on that in this whitepaper.

More info

I have to confess, the coffee wasn’t kicking in just yet when I was attending this one and so I cannot offer any very extensive mathematical or other analysis of it. I can say simply that it was interesting to see the graphs they did of internet topology and number of requests. You can learn more on their website and blogs.

Domain Generation Algorithms

One of the points that leapt out at me was the issue of domain generation algorithms. I hadn’t really thought of that. When speaking of names, one thinks of such things as load balancing, squatting, running out of IPv4 addresses, stuff like that. I should have thought of that simply by looking at the various auto-generated caller-ID’s I see in the 6 or 7 phone spam calls I get every day.
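For the curious, a domain generation algorithm is easy to sketch. This toy version (my own illustration, not any real malware family’s algorithm) derives a fresh crop of pseudorandom domains from the date, so malware and its operator can rendezvous without hard-coding a single, easily blocked domain:

```python
import hashlib
from datetime import date

def dga_domains(seed_date, count=5, tld=".info"):
    """Toy domain generation algorithm: both the malware and its
    command-and-control operator derive the same date-seeded list of
    pseudorandom domain names, so the operator only has to register
    one of them on any given day."""
    domains = []
    for i in range(count):
        data = f"{seed_date.isoformat()}-{i}".encode()
        digest = hashlib.sha256(data).hexdigest()
        domains.append(digest[:12] + tld)   # e.g. "a3f9c0d2e1b4.info"
    return domains
```

Seen from OpenDNS’s side, the tell is a burst of lookups for long, random-looking names that mostly fail to resolve.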



Why are we still vulnerable to side-channel attacks? (and why should I care?)

2013 B-Sides San Francisco Talk Summary Series

This was a great talk given by Jasper Van Woudenberg, from Riscure.

Whenever I attend these talks, I always include a couple that are pure indulgence, to keep me awake, sustain my enthusiasm, and broaden my knowledge. At DefCon there was one about using quantum physics for random key generation and another using GPUs for massively parallel password cracking. Schuyler Towne’s lock talks are always a joy, and this talk fits nicely into that category. Though I should say “pure indulgence” is not entirely correct: while it is true that there will never be a one-domino causality chain from any of these indulgence talks to any security assessment code I might write for NTO, the stimulation of thought does seep into product, and some things oblique to a particular software product, like physics and numerical analysis, do have a way of popping up in algorithms I write for the product.

What are side-channel attacks?


So first things first… I expect at least some of you, like me, had to look up “side-channel attacks.” There have been side-channel attacks in the news recently, like the one last year where, as published in ThreatPost, a side-channel attack was used to steal a cryptography key from co-located virtual machines. Wikipedia defines a side-channel attack as “any attack based on information gained from the physical implementation of a cryptosystem, rather than brute force or theoretical weaknesses in the algorithms (compare cryptanalysis).” Side-channel attacks have to do with measuring fluctuations in hardware and then intuiting the behaviour of an algorithm running on that hardware. Or, more generally, monitoring something related to the information you are pursuing and then doing further analysis of the monitored information to tease out the desired information.

Passive methods: obtaining an RSA key by monitoring power usage

The first example the speaker addressed was ascertaining an RSA key by monitoring the power usage of the CPU executing the algorithm. The RSA encryption algorithm bottom-lines to a sequence of squares and multiplies, but the multiplies are executed only for 1-bits in the key. So what you see in the power graph is a sequence of spikes with time differentials between them that reveal whether or not a multiply was executed in that iteration, and from this one can piece together the key. The countermeasure is to do a dummy multiply when the key bit is zero so that each iteration does a square and a multiply. This of course increases the execution time of the algorithm, and it is also not a sure thing; the dummy multiply is still slightly different from the actual multiply, though you do have to try harder to get the data. With this and the other approaches the speaker discussed, a common denominator is that if you have a lot of time with the device in question, you can simply do massive numbers of iterations and overwhelm subtleties with statistics.
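To see why the power trace leaks the key, here is a toy sketch of the square-and-multiply loop at the heart of RSA, along with the dummy-multiply countermeasure described above. This is illustrative miniature code of my own, not production crypto:

```python
def modexp_leaky(base, exponent, modulus):
    """Left-to-right square-and-multiply.  The multiply happens only
    for 1-bits of the exponent; on real hardware that data-dependent
    branch is exactly what shows up in the power trace."""
    result = 1
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus       # always square
        if bit == "1":
            result = (result * base) % modulus     # multiply only on 1-bits
    return result

def modexp_dummy(base, exponent, modulus):
    """Same algorithm with the countermeasure: perform a multiply on
    every iteration and discard it for 0-bits, so each round has the
    same square-then-multiply shape in the trace."""
    result = 1
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus
        product = (result * base) % modulus        # dummy or real multiply
        if bit == "1":
            result = product                       # kept only for 1-bits
    return result
```

Both versions compute the same value; the second just spends extra work to flatten the per-bit difference, and even then, as the speaker noted, the dummy multiply is not perfectly indistinguishable.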

Clarifying Statistics and Algorithms

Interesting related side note: I knew a guy at a previous job who did astronomical photography involving multiple all-night exposures of the subject being photographed (a galaxy in his case). It turns out that the more pictures you take of the same subject and then combine later, the more perturbations like atmospheric distortion are averaged out and the clearer the image becomes. Statistics in general works like this: the persistent factors become ever more emergent and pronounced, and the error ever smaller, the more samples you take. Sometimes an algorithm such as ECDSA may power-spike in such a way that you do not directly get the variable you are after, but you do get one of the variables in the formula, and so with a bit of algebra and several iterations you can get what you are after. Also, such things as the algorithm using 24-bit numbers and dealing with them 8 bits at a time can be used to analyse the power profile of the algorithm. Interestingly, the speaker said that even if the algorithm used 16-bit numbers, an 8-bit approach gets you not-as-good but still usable correlations.
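A quick sketch of that stacking effect (my own illustration): average n noisy samples of a value and the error of the mean shrinks roughly as 1/sqrt(n), whether the samples are exposures of a galaxy or power traces of a chip.

```python
import random

def mean_of_samples(true_value, noise_sigma, n, seed=0):
    """Average n noisy measurements of true_value.  The standard
    error of the mean is noise_sigma / sqrt(n), which is why stacking
    many exposures (or many power traces) makes the signal emerge."""
    rng = random.Random(seed)   # seeded for reproducibility
    total = sum(true_value + rng.gauss(0, noise_sigma) for _ in range(n))
    return total / n
```

With noise_sigma = 1.0, a single sample is typically off by about 1.0, while the mean of 10,000 samples is typically off by about 0.01.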

Side channel attacks – Active methods

That fairly accounts for the passive methods he discussed.  He then went on to discuss active methods.  These include glitching supply voltage, glitching the clock, and glitching the chip itself using powerful optical spikes.  A well placed supply glitch introduces errors in the execution of the algorithm that can yield information as to the data it was dealing with when it errored.  Clock glitches can cause the algorithm to skip instructions such as branches that can also produce useful data in the power signature.  Optical glitches target specific parts of the chip with electromagnetic interference (light is an EM wave) which, again, can yield information via how they affect the running of the algorithm.  Countermeasures to these techniques include inserting random waits before comparisons and doing multiple comparisons and requiring the results to be the same (being wary of compiler optimizations, i.e. turn them off).
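Those comparison countermeasures can be sketched in software terms (my own illustration; real implementations live much closer to the metal, and in C you would also disable the compiler optimizations that might elide the second check): wait a random interval so a glitch cannot be timed to the branch, then do the comparison twice and require agreement.

```python
import secrets
import time

def hardened_compare(candidate, reference):
    """Glitch-resistant comparison sketch: a random wait makes the
    critical branch hard to target with a precisely timed fault, and
    doing the check twice means a single injected fault is detected
    rather than silently accepted."""
    time.sleep(secrets.randbelow(10) / 1000.0)   # random 0-9 ms wait
    first = secrets.compare_digest(candidate, reference)
    second = secrets.compare_digest(candidate, reference)
    if first != second:
        raise RuntimeError("comparison results disagree: possible fault")
    return first and second
```

The point is not that Python is fault-injection-proof (it isn't) but that the structure, randomized timing plus redundant checks with an agreement requirement, is the shape of the defence.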

As you would expect, these too can be circumvented, but they make the attacker’s job harder. The data one gets from glitched execution of a crypto algorithm can in some cases be analysed by lattice methods. As the speaker said, he didn’t have time to fully elucidate this, but in summary, one calculates a lattice and then calculates the closest vector within that lattice (this is admittedly a glossover paraphrase of an admitted glossover to begin with), and it can be used to reconstruct crypto keys from the glitched and power-signatured algorithm.

This talk was most enjoyable to someone like me. In security, it is always valuable to be made to think about unexpected ways to acquire information since, of course, the more clever of the attackers are doing just that. We have all noticed how computers have become orders of magnitude faster and more efficient. What once took hundreds of dollars’ worth of Cray time and about as much electrical power can now be done on a $300 computer for “too cheap to meter” electrical power. If you have ever designed anything around a 6502 chip, you know those old chips consume whatever power they consume nearly constantly, regardless of what they are doing. This is not to say the methods elucidated in this talk would not work on a 6502, but modern chips that throttle themselves according to what they are doing greatly help these methods along compared to the old chips. The biggest software threat to security in the Apple-II days was getting a virus. On a computer that was not connected to the internet or any other communications net, not running services that listen for commands to execute, and barely fast/capacious enough to run the one program it was running, one didn’t worry about security much. But as we obsess over CSRF, XSS, SSL, SQLI, etc., we must remember that hardware has evolved with software, and therefore hardware vulnerability has also evolved with software vulnerability.


Twitter SSL

Secure SSL, “Tales of Transport Layer Security at Twitter” from 2013 B-Sides San Francisco

SSL++; Tales of Transport Layer Security at Twitter

I am happy to have attended this talk at 2013 B-Sides San Francisco, by @jimio, a Twitter employee, on SSL security and how to create a secure SSL site. The title was “SSL++: Tales of Transport Layer Security at Twitter” and it was definitely a good way to wake up and start the day. Twitter was able to switch to exclusive SSL and netted out to a faster site with SSL. In this talk, he discussed why and how.



First point: I am indebted to the speaker for prompting me to do a bit of reading about the CRIME and BEAST SSL/TLS attacks. I am primarily a software architect, but of course at each job on my résumé I have picked up very interesting domain knowledge, and crypto is full of things like CRIME and BEAST that do not occur to you as you use or design a crypto algorithm. To summarize for the benefit of those who need it (and to presage a little some of the similar inject-then-diagnose approaches to acquiring crypto keys I will be writing about w.r.t. other talks I attended), the CRIME attack works by injecting content into TLS compressed headers (indeed, it is useful against any encrypted compressed information) and then observing the resulting size of the compressed information, relying on the fact that the compression algorithm economizes on repeats. That is, if your injected content causes the size to increase, then it is probably not in the original content. If the size does not increase (or increases very little), it probably is in the content. So one can guess and home in on the compressed content without having to know the crypto key. BEAST works by injecting content that is 15 bytes, then 14, then 13, … down to zero, so that at each iteration the last byte of the content is the only unknown byte and one only has to brute-force 256 combinations rather than 2^128. This reminds me of Schuyler Towne’s talk about how to get into those Base-10 suitcase locks. Typically a session cookie is being pursued with this attack.
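The CRIME idea can be demonstrated in a few lines with an ordinary compression library. This sketch (my own, with a made-up “secret”) shows the oracle: a guess that matches the secret compresses into a back-reference, so the output is smaller than for a wrong guess.

```python
import zlib

SECRET = b"Cookie: session=X7gQ"   # hypothetical secret the attacker wants

def oracle_length(injected):
    """Length of the compressed (secret + attacker-injected) payload,
    standing in for the on-the-wire size CRIME observes.  DEFLATE
    encodes repeats compactly, so injected guesses that match the
    secret compress better."""
    return len(zlib.compress(SECRET + injected))

def guess_next_byte(known_prefix):
    """Pick the byte whose injection yields the smallest compressed
    size: it is the most likely next byte of the secret."""
    candidates = bytes(range(32, 127))   # printable ASCII
    return min(candidates,
               key=lambda b: oracle_length(known_prefix + bytes([b])))
```

Repeating guess_next_byte while extending the known prefix recovers the secret byte by byte, which is exactly the guess-and-home-in loop described above, only measured via ciphertext length instead of plaintext.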

Transport Layer Security at Twitter

Okay, there’s the preamble. The balance of this talk was not so much about exotic SSL vulnerabilities like those discussed above, but simply about vulnerabilities stemming from not thoroughly using SSL. Sometimes this can mean the login page is in SSL (lovely, protects the password) but the cookie is in cleartext (bollocks). So it needs to be SSL everywhere. Twitter instituted such a change at one point and gave customers the ability to opt out, and about 1% did. However, even when you think you are fully SSL, there are still CSRFish things people can do, like <img src=""> which can prompt GETs over HTTP, thereby revealing the user’s cookie even if the response is innocuous. The speaker discussed man-in-the-middle attacks, though not of the sort you the reader are likely to have been hearing about lately but the simpler variety: intercept the SSL and broker it as HTTP to the server, and thereby read all the content unencrypted. Again, the countermeasure here is absolutely airtight SSL on the site. And then there are things like #!/dir or anything similar where everything past the # does not get sent to the server and is instead processed with client-side script. That one actually transcends the thesis of this talk. Certainly it is an SSL issue, but it is a whole-bunch-of-other-things issue as well. Prior to working in information security, I worked at a company where we were doing loads of this kind of stuff in a web application and also calculating cookies in client-side JSP (!)… 13 years ago… more naive times. The management hired a security firm to audit, and that is how we found out about this stuff. We weren’t developing an E-commerce site; it was more of an internal-use site, but of course one wants to be secure even in that environment.
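The “login over SSL but cookie in cleartext” failure mode has a standard fix worth showing alongside the talk’s SSL-everywhere advice (this is general best practice, not something attributed to the speaker): mark the session cookie Secure so the browser never sends it over plain HTTP, and HttpOnly for good measure. A minimal sketch using Python’s standard library:

```python
from http.cookies import SimpleCookie

def session_cookie(value):
    """Build a Set-Cookie value for a session cookie with the Secure
    flag (never sent over plain HTTP, so an <img src> over HTTP or a
    downgrading man-in-the-middle never sees it) and HttpOnly (not
    readable from client-side script)."""
    c = SimpleCookie()
    c["session"] = value
    c["session"]["secure"] = True
    c["session"]["httponly"] = True
    return c["session"].OutputString()
```

With the Secure flag set, even the accidental http:// subresource requests described above go out without the cookie attached.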

Every request should be SSL

The overall goal is to get all requests, internal and external to your site, to be SSL. Obviously you can control the former but not fully the latter, so you do the best you can on the latter. For example, always point the canonical link rel at an https URL. Google’s crawlers respect this but Bing’s and Yahoo’s don’t. There is apparently some partisanship to the effect that it is unseemly to use link rel in this fashion (it is not canonical to use canonical this way :-)?), but as you can imagine, the speaker rejects such arbitrary religious arguments, as do I. Then there is the issue of people not typing fully qualified links with protocol into their browsers (it’s been a while since 1992, after all). Of course you expect any browser to GET, but interestingly, Twitter apparently convinced Chrome developers to put an “if (it is twitter) {assume HTTPS}” line in their code. More measures to encourage clients to request nothing but SSL include the Strict-Transport-Security header and CSP.
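Those last two measures can be sketched as literal response headers. This is a minimal illustration of my own (the values are typical examples, not Twitter’s actual configuration): HSTS tells the browser to use only HTTPS for this host for the stated period, and a CSP directive can insist that subresources be fetched over https as well.

```python
def security_headers(max_age=31536000):
    """Response headers that push clients toward all-SSL: HSTS pins
    the browser to HTTPS for max_age seconds (here one year) and for
    subdomains, and the CSP restricts resource loads to https URLs."""
    return {
        "Strict-Transport-Security": f"max-age={max_age}; includeSubDomains",
        "Content-Security-Policy": "default-src https:",
    }
```

In any web framework these would simply be merged into every response; the whole point of HSTS is that it applies site-wide, not per page.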

Pros & Cons of Cert-Pinning

At this point he spoke about cert pinning, which I wrote up extensively with regard to another talk, so suffice it to say it is a good idea wherever feasible. Mobile apps were the focus of that other talk, and the disadvantage to cert pinning was the redeployment of all in-the-field apps to use the new baked-in cert when the cert needs to be changed. These would be things like standalone games that communicate with a server. So if you are building a web application that is exclusively used as such and is therefore inherently self-deploying, that concern is lessened, though I suppose it requires savvy users/browsers to maintain client-side trusted certs and not capriciously OK new ones.

Performance issues with going all-SSL? Not really

The speaker concluded by addressing performance considerations of going exclusively encrypted.  In short, he said optimize other areas of your website to buy back the performance lost by going SSL, which is not that significant to begin with.  The advantages far outweigh the liabilities of performance.  Further, his company (Twitter) is a case in point.  They cleaned up their code as part of the switch to exclusive-SSL and netted out to a faster site with SSL.

I’m finding that a common denominator in a lot of these talks is “the more things change, the more they stay the same” and possibly “there is one (web developer) born every minute.” The exotic, sexy (in the nerd sense) vulnerabilities command our attention as we want to stay ahead of the bleeding edge, but the old vulnerabilities (particularly as they combine with new ones) keep resurfacing, and constant vigilance implies remembering them as much as it does staying abreast of new developments. Our CEO, Dan Kuykendall, likes to refer to it as Where’s Waldo or Leisure Suit Larry. The same old things just keep popping up in new places.