Adrian Sanabria @sawaba, 42 tweets
Alright, today is the day we get to find out if I was right or wrong about the level of dysfunction necessary for the failures that allowed the #Equifax breach to occur.

Why today? Because the House Oversight report has been released. Merry Christmas! oversight.house.gov/wp-content/upl…
Feel free to download a copy and read with me as I go through the report. I predicted 30 failed controls? I'm sure that was just a random number I threw out, but let's see how close I was. #Equifax
Also, for a refresher, this was my blog post about the breach. blog.savagesec.com/equifax-breach…

I also made a prediction that the breach would be financially lucrative for Equifax - that it was effectively the best lead gen event for the company EVER.
Starting on the report, the first thing I notice is that it is beautifully written. The executive summary is 3 pages and you can learn a LOT from it. If you do nothing else, grab this report and read pages 2-4. Then read it again. Then take notes. Then share with your colleagues.
As a reminder, Equifax's business model is to gather as much valuable and sensitive personal data as possible and to monetize that data. They were a HUGE target for cyberattacks and it's hard to imagine they weren't aware of it.
The report highlights this fact, noting that CEO Richard Smith created an aggressive growth strategy and "boasted Equifax was managing 'almost 1200 times' the amount of data held in the Library of Congress every day."
If that doesn't paint a target on your company, I don't know what will.

Anyway. On to the security issues.
We start with the bottom line in the report:

"Equifax, however, failed to implement an adequate security program to protect this sensitive data. As a result, Equifax allowed one of the largest data breaches in U.S. history. Such a breach was entirely preventable."
We jump right into Struts - anyone that's read about this breach knows that a Struts vulnerability was the first door to open for attackers. Was Equifax aware of the issue? Absolutely.
Equifax's Global Threat and Vulnerability Management Team (GTVM) forwarded an alert about Struts to over 400 internal employees (likely a pre-existing distribution list for vuln announcements). As many orgs do, Equifax had an internal meeting about this specific vulnerability.
The email was sent 2 days after the vulnerability went public.
The email instructed that Struts be patched within 48 hours.
The meeting was held a week after the email was sent.

Already, there is a red flag here.
Why hold a meeting about fixing a vuln *5 days after* everyone was required to fix it?

Because you know that no one actually did.
A 48-hour SLA is reasonable (slow, even) from a general standpoint, given that this was a critical RCE accessible from the public Internet.

However, you need a lot of internal political clout and organization in a company that big to get things patched that quickly.
So, keep in mind, the attack didn't occur until almost TWO MONTHS after the meeting that occurred 9 days after the vulnerability was first publicly announced.

Equifax had a total of 2 months and one week to address this issue.
No one will be surprised to hear that Equifax had a lot of legacy systems. The one running a vulnerable version of Struts that got compromised, ACIS, was first built in the 1970s. I'm sure we'll learn more about "exceptions for ACIS" as we read into the details later.
The attack lasted 76 days and dropped dozens of web shells.

"How many web shells, Adrian?"

I'm glad you asked! 30 web shells. 30.

@viss found the answer back when the details were first coming out from the Mandiant investigation.
"WOW, Adrian, 30 web shells is a lot of web shells. That begs the question, how large was Equifax's Internet-facing footprint?"

Another good question! Equifax owned a massive IP space - 17,152 possible IPs, to be exact.
Okay, so massive pwnage at the edge of the network. What's next? Why, the attackers find a "file containing unencrypted credentials".

I know what you're thinking. NO WAY - does anyone really still do that these days???
The answer is NO, of course not! They did it 13 years ago, though, and all those files are still where 13-year-old files go, and none of those passwords have been changed, because that would be a huge pain.
Forgive me - I'm guessing here - I'm sure we'll find more specifics in the body of the report. After all, WE'RE ONLY ON THE SECOND PAGE OF THE REPORT.

Any bets as to what kind of file it was? Here's a hint: it contained creds for 48 databases.
Personally, my money is on a db.properties file. So much handier to have ONE file for the entire org, right? I mean, your database in Brazil will never need access to that database in the UK, but it's just more convenient to publish a single file.
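To be clear, the report doesn't say what kind of file it actually was - this is a purely hypothetical sketch of the sort of shared properties file I'm picturing, with made-up hostnames and credentials:

```properties
# Hypothetical example (NOT from the report): one flat file for the
# whole org, plaintext credentials, dozens of databases.
acis.db.url=jdbc:oracle:thin:@acis-db01:1521:ACIS
acis.db.user=acis_app
acis.db.password=Summer2004!
billing.db.url=jdbc:db2://billing-db01:50000/BILLING
billing.db.user=bill_svc
billing.db.password=Summer2004!
# ...and 46 more entries like these...
```

One file like this, readable from a compromised web server, and an attacker no longer needs to pivot - they just read the map.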
I may be getting a bit snarky.
9000 queries later, the attackers had found what they were looking for and were exfiltrating massive amounts of data from these 48 databases.

Why didn't Equifax notice the exfiltration? Well, honestly, most orgs aren't set up to detect data exfiltration on the wire.
But wait, Equifax WAS set up to detect that sort of activity! Why didn't they? I don't have the words, so I'll just quote directly.
"Equifax did not see the data exfiltration because the device used to monitor ACIS network traffic had been inactive for 19 months due to an expired security certificate. On July 29, 2017, Equifax updated the expired certificate and immediately noticed suspicious web traffic."
WOW. 19 months.

And that's because no one was formally responsible for certificate management internally. In an organization that owned over 17,000 routable IPs. Maybe it was just internal cert responsibility that got the hot potato treatment?
"At the time of the breach, however, Equifax had allowed at least 324 of its SSL certificates to expire."
I'm guessing they were doing some SSL inspection, which is why the certs were so important. I'm going to go out on a limb and say they probably shouldn't have been solely depending on packet inspection.
Even without decrypting traffic, they should have noticed massive amounts of data going to servers in China and Germany from unusual sources that don't normally send large amounts of data to those destinations. Netflow should have been enough, IMO.
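Even a toy baseline over netflow-style counters would flag that. A sketch with a simple mean-plus-sigma threshold (the numbers and the threshold choice are mine, purely illustrative):

```python
from statistics import mean, stdev

def is_anomalous(baseline, observed, sigma=3.0):
    """Flag an outbound byte count far above a host's historical
    baseline. Toy sketch of netflow-based exfil detection -- a real
    detector would also track destinations, time of day, etc."""
    if len(baseline) < 2:
        return False  # not enough history to judge
    threshold = mean(baseline) + sigma * stdev(baseline)
    return observed > threshold

# Hypothetical: a server normally pushes ~50 MB/day outbound,
# then suddenly sends 9 GB.
daily_mb = [48, 52, 50, 47, 55, 51, 49]
print(is_anomalous(daily_mb, 9000))  # True
print(is_anomalous(daily_mb, 55))    # False
```

No decryption required - volume and destination alone tell the story.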
Anyway, after fixing the certs, they IMMEDIATELY noticed the attack, proving they owned the tools to get the job done.

At this point, the executive summary gets into breach response and corporate accountability.
The underlying conclusion throughout the Equifax breach report is that:
1. Staff was AWARE of deficiencies
2. Proper processes, tools and policies existed
3. Lack of leadership and accountability allowed processes to fail, tools to fall into disrepair, and policies to be disregarded.
I'm going to skim through the rest of the report for any other interesting tidbits you can bring into work tomorrow and wave around in people's faces*

*Use tact; don't literally do this, please. You can't help anyone when you're fired.
The GTVM email mentioned earlier notified everyone (literally hundreds of people) that the vuln was not just a 10/10, but was actively being exploited.

This was 2 months before the attack.
Equifax SCANNED for vulnerable versions of Struts, but found nothing.

Why?

They forgot to use the recursive flag and were just searching the root web directory with the tool.

I can imagine the commentary:
"WOW, this tool is FAST!"
"Yeah, good thing we're not vulnerable!"
Host monitoring failed to notice 'whoami' being run on webservers.

If your Java webapp starts randomly running 'whoami', that should be cause for alarm. Simple anomaly detection works here, as it does on Windows (CMD.exe or PowerShell running as a child of WINWORD.EXE = bad).
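That kind of parent/child rule is nearly a lookup table. A toy sketch - the process pairs below are my examples, not a vetted ruleset:

```python
SUSPICIOUS = {
    # parent process -> children it should essentially never spawn
    # (illustrative pairs only; tune for your environment)
    "java":        {"whoami", "sh", "bash", "cmd.exe", "powershell.exe"},
    "winword.exe": {"cmd.exe", "powershell.exe", "wscript.exe"},
}

def suspicious_spawn(parent, child):
    """Return True if this parent/child process pair matches a
    known-bad pattern, in the spirit described above."""
    return child.lower() in SUSPICIOUS.get(parent.lower(), set())

print(suspicious_spawn("java", "whoami"))       # True -- web shell behavior
print(suspicious_spawn("java", "java"))         # False
print(suspicious_spawn("WINWORD.EXE", "cmd.exe"))  # True -- macro malware
```

Feed it events from your EDR or auditd and you catch a web shell the first time it runs a recon command.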
"Equifax's Emerging Threats team released a Snort signature rule...to detect Apache Struts exploitation attempts."

Wait, WHAT? Equifax had a dedicated Emerging Threats team???

"The Equifax Countermeasures team installed the Snort rule..."

A COUNTERMEASURES team also?!
Anyway, they apparently never tested this rule, because it failed to trigger during the attack. TEST YOUR CONTROLS. It blows my mind that so many security controls are deployed but never tested.

This is scriptwriting 101. I don't think a script of mine has EVER worked on the first try.
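Testing a detection rule can be as cheap as replaying a known-bad sample against it. Here's a stand-in detector for the Struts Content-Type OGNL injection (CVE-2017-5638) plus the one test Equifax's rule apparently never got - the regex is illustrative, not the real Snort signature:

```python
import re

# CVE-2017-5638 exploits stuffed an OGNL expression into the
# Content-Type header, so look for its telltale "%{...(" opener.
# (Illustrative only; real signatures are fussier than this.)
OGNL_MARKER = re.compile(r"%\{.*\(")

def rule_fires(content_type):
    return bool(OGNL_MARKER.search(content_type))

# The point of the tweet: prove the rule fires BEFORE you rely on it.
benign  = "multipart/form-data; boundary=----x"
exploit = "%{(#_='multipart/form-data').(#cmd='whoami')}"
assert rule_fires(exploit) and not rule_fires(benign)
print("rule fires on known-bad sample: OK")
```

Five lines of harness. If the assert fails in the lab, it would have failed in production too - you just find out on your terms.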
McAfee Vulnerability Manager was used to scan for the Struts vuln. Twice. It didn't find it. I seem to recall someone saying it was hard to scan for, but honestly, if you're looking for one vuln, don't use a scanner.

@gossithedog used Google
Yet ANOTHER layer of diligence done by Equifax (good), but not done properly (bad). False negatives are a thing with vuln scanners - with a vuln this serious, I'd want a vulnerable server running in a lab to test my scanner against.
Furthermore, a reminder that network scanners are terribly unreliable in general. In this case, Equifax OWNED the systems they were scanning for Struts. Use host-based software or credentialed scans to detect file/software versions locally - far fewer false positives and negatives.
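A local, filename-based version check is crude but has none of a remote scanner's blind spots. A sketch - the cutoffs are the public CVE-2017-5638 fix versions (2.3.32 and 2.5.10.1), and for triage purposes anything older in those branches is treated as suspect:

```python
import re

def parse_version(jar_name):
    """Extract a version tuple from e.g. 'struts2-core-2.3.31.jar'."""
    m = re.match(r"struts2-core-([\d.]+)\.jar$", jar_name)
    return tuple(int(p) for p in m.group(1).split(".")) if m else None

def vulnerable(jar_name):
    """True if the jar predates the CVE-2017-5638 fixes. Filename
    checks can be fooled (repacked jars), but they run locally with
    none of a network scanner's request-handling quirks."""
    v = parse_version(jar_name)
    if v is None:
        return False
    if v[:2] == (2, 3):
        return v < (2, 3, 32)
    if v[:2] == (2, 5):
        return v < (2, 5, 10, 1)
    return False

print(vulnerable("struts2-core-2.3.31.jar"))  # True -- the bad one
print(vulnerable("struts2-core-2.3.32.jar"))  # False -- patched
```

Pair this with the recursive jar search and you have a checker that would have found the ACIS problem in minutes.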
The CSO reported to the Chief Legal Officer, who was referred to as the "Head of Security."
Alright - in total, I count 34 control and process failures that contributed to the #Equifax breach.

Perhaps 5 or so could have prevented the breach entirely. Many of the remaining 29 could have detected the breach in enough time to stop it.