Congratulations and thanks to @jurajsomorovsky, @NimrodAviram and @ic0nz1 who reported this to AWS in November last year. This was a really interesting find that took a lot of diving! Story ...
Reassuring standard practice tweet: if you're using AWS services or AWS to terminate TLS/SSL you don't need to do anything. Amazon s2n, our Open Source implementation of TLS, was not impacted (more about why later).
If you're using OpenSSL 1.0.x on your own instances, you're probably still not impacted, and if you are, the risk is low, but go ahead and upgrade to OpenSSL 1.0.2r anyway! It was released this morning. O.k. story ...
Juraj, Robert, Nimrod built a scanning tool that scans the internet for padding oracles. TLS supports a number of cipher suites, and the older CBC ones encrypt in fixed size blocks. If the data you're sending doesn't add up to a whole block size, some padding is added.
When a CBC record comes in, it can be malformed in 2 ways: the padding can be wrong, or the signature (aka the MAC) can be wrong. Because the TLS design got the order of padding and MACs the wrong way around, it's important that implementations don't reveal which is wrong.
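To make the padding and MAC checks concrete, here's a simplified sketch of the TLS 1.0-1.2 CBC padding scheme (RFC 5246): the record is padded with padding_length + 1 bytes, each equal to padding_length. The MAC, which in real TLS sits between the data and the padding, is omitted here for brevity:

```python
def tls_cbc_pad(plaintext: bytes, block_size: int = 16) -> bytes:
    """Pad per the TLS 1.0-1.2 CBC scheme: append padding_length + 1
    bytes, each with the value padding_length (RFC 5246, 6.2.3.2)."""
    pad_len = (block_size - (len(plaintext) + 1)) % block_size
    return plaintext + bytes([pad_len] * (pad_len + 1))

def tls_cbc_unpad(record: bytes) -> bytes:
    """Strip the padding, verifying every padding byte."""
    pad_len = record[-1]
    if record[-(pad_len + 1):] != bytes([pad_len] * (pad_len + 1)):
        # Revealing THIS failure vs. a MAC failure is the oracle.
        raise ValueError("bad padding")
    return record[:-(pad_len + 1)]
```

Because the padding is checked before the MAC on decryption, an implementation has to take care that a "bad padding" failure and a "bad MAC" failure look exactly the same from the outside.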
If implementations do, that's a padding oracle, and if an attacker can get an implementation to encrypt the same data repeatedly, and can also modify the encrypted traffic and observe the difference, eventually they would be able to decrypt the data. Sounds bad!
So the scanning tool does something simple: it negotiates a CBC cipher suite, and makes a connection with a bad MAC and a connection with bad padding and looks for any difference. This is my favorite kind of science: actually go check the real world!
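The core of that check can be sketched with a toy simulation (hypothetical server behaviors, not the real tool, which works over live TLS connections): probe with a bad-MAC record and a bad-padding record, and flag any observable difference.

```python
def leaky_server(error_kind: str) -> str:
    # A vulnerable implementation reveals WHICH check failed,
    # e.g. via different alerts, timing, or connection behavior.
    return "bad_record_mac" if error_kind == "mac" else "decryption_failed"

def safe_server(error_kind: str) -> str:
    # A correct implementation responds identically either way.
    return "bad_record_mac"

def has_padding_oracle(server) -> bool:
    """Probe once with a bad MAC and once with bad padding; any
    difference in the observable response indicates an oracle."""
    return server("mac") != server("padding")
```

In the real scan the "response" isn't just an alert string; it includes timing and how the connection is closed, which is exactly where this particular bug showed up.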
Not to brag, but AWS is popular. Scan us and you'll find literally millions of IPs that are terminating TLS/SSL, mostly on behalf of customers for their web applications, web services, and websites. Out of these, the scanning tool found at least 100s that showed differences.
In the report we got, it was a "strange" list: customer IPs running unknown software, load balancers that were in FIPS mode, load balancers running on old hardware; there didn't seem to be much in common.
We also have our own tests and monitoring for padding oracles. You can see s2n's here: github.com/awslabs/s2n/bl… , and re-running these checks would actually show no impact. Head-scratcher!
At this point Steven Collison and Andrew Hourselt from our TLS/SSL team started having to dive really deep to find out what was going on.
First we were able to determine that most of the IPs were using OpenSSL to terminate traffic. But it wasn't always impacted. In fact the really common OpenSSL users, software like nginx, Apache ... not impacted at all!
Tracing through the code of the impacted applications showed that the problem only happened when the application called SSL_shutdown() twice, even after a protocol error.
Calling SSL_shutdown() twice is normal when there's no problem with a connection, and it should be harmless in the error case, so it's understandable that some applications do it ... but thankfully it's not common.
The actual leak of info, whether it was a padding or MAC error, would effectively show up as a timing or connection close difference between these calls. Impacted applications would either seem to time out, or close connections, differently, depending on the error. Subtle.
O.k. so next question: why don't existing padding oracle tests find this? Well, it turns out it only happens with zero-byte records: records that have no data in them. And the scanning tool happens to send zero-byte records.
Zero-byte records aren't common: browsers don't send them afaict, and packet dumps seem to show that they are exceedingly rare, which makes sense: if you have no data to send, why would you bother? So that's very reassuring.
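An interesting property of zero-byte records: with no application data, the encrypted record is nothing but MAC and padding. A quick size calculation (assuming HMAC-SHA1 and AES-CBC, just for illustration):

```python
MAC_LEN = 20   # HMAC-SHA1 output size
BLOCK = 16     # AES block size

data_len = 0   # a zero-byte record
# CBC plaintext = data + MAC + padding (padding_length + 1 bytes)
pad_len = (BLOCK - (data_len + MAC_LEN + 1)) % BLOCK
total = data_len + MAC_LEN + pad_len + 1   # -> 32: two blocks of pure MAC + padding
```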
Next weird thing: the problem also happened if OpenSSL wasn't using AES-NI hardware acceleration. In practice this means it impacted 3DES (which people should have turned off for other reasons!) and older hardware.
This also explained why FIPS software appeared in the list, because FIPS software generally can't use AES-NI.
At this point, a lot of factors have to be combined: the TLS software would have to be coded in an uncommon way, using OpenSSL, negotiating older cipher suites, on older hardware, with clients that send 0-byte records and can be made to repeat the same data over and over, with an active MITM.
But that makes it more interesting! How do we find and prevent even these kinds of rarefied cases? Automation, like the scanning tool, is clearly critical - but can we do more at the point of code?
One thing I'm grateful for is that in s2n we kill connections on any error, and we do it in a way where s2n will completely refuse to interact with the connection after the error has happened. Just with a closed flag ...

s2n uses OpenSSL's libcrypto for the underlying cryptography, and the same issue in that code /could/ have caused impact within s2n were it not for that practice. Basically this check .... github.com/awslabs/s2n/bl…
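That kill-on-error pattern can be sketched like this (a hypothetical illustration of the idea, not s2n's actual code, which is in C):

```python
class Connection:
    """A connection that latches closed on any error: after that,
    every call behaves identically, so nothing error-specific can
    leak through follow-up calls like a second shutdown."""
    def __init__(self):
        self.closed = False

    def _error(self, reason: str) -> str:
        self.closed = True        # latch: the connection is dead forever
        return "generic_alert"    # same observable result for ANY error

    def recv_record(self, record_ok: bool, failure: str) -> str:
        if self.closed:
            return "closed"       # uniform response, no detail
        if not record_ok:
            return self._error(failure)
        return "ok"

    def shutdown(self) -> str:
        # Safe to call any number of times; identical behavior whether
        # the earlier failure was a bad MAC or bad padding.
        if self.closed:
            return "closed"
        self.closed = True
        return "close_notify"
```

With the flag checked at the top of every entry point, even an application that calls shutdown twice after an error can't observe anything that depends on which check failed.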
Of course the impact still would have been small, because of the other factors, but I'm glad we have that check! Anyway, thanks again to the issue reporters, read their paper when it comes out! And thanks to Andrew and Steven from the TLS team. That's it, unless AMA.