Jake Williams @MalwareJake
Just had some fun at the office. An ESX server had an issue earlier today and crashed. Admin brought everything back up and powered on all the VMs. Everything looked good. I went to use a VM and can't get to it. Can't ping it. Nmap to it and something's there. 1/n
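(For anyone who wants to see what "can't ping it, but something's there" looks like: a minimal Python sketch of that kind of probe, with a placeholder IP and port list rather than the real environment or the actual commands used.)

```python
# Minimal sketch of the "can't ping it, but something answers" check.
# The IP and port list are placeholders, not the real environment.
import socket
import subprocess

TARGET = "10.0.0.50"             # hypothetical static IP of the "missing" VM
PORTS = [22, 80, 443, 445, 902]  # a few common ports to try

# ICMP check via the system ping (Linux-style flags; non-zero exit = no reply)
ping_ok = subprocess.run(
    ["ping", "-c", "2", "-W", "2", TARGET],
    stdout=subprocess.DEVNULL,
).returncode == 0
print(f"ping reply: {ping_ok}")

# Even with no ping reply, a TCP connect can reveal that *something* is up,
# just not the services the real VM should be exposing.
for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2)
        state = "open" if s.connect_ex((TARGET, port)) == 0 else "closed/filtered"
        print(f"{TARGET}:{port} -> {state}")
```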
Problem is that it's not the right something. I ask our admin and he checks the VM from the ESX server console. He says "it shows a duplicate IP." This is a problem because it's a static assignment - so what has my IP?! 2/n
Also, I hear someone in the SOC say "oh <expletive deleted>! Guys, we've got a problem!" They saw the nmap scan and alerted on it immediately. It looked super sketch because of how I did it. Bottom line, I'm happy I have our SOC for our customers AND watching us. 3/n
So we identify the rogue asset's MAC address, dump switch MAC tables, and figure out the conflict is on the ESX server itself. Huh?! It turns out we still had a few legacy VMs on the ESX server that the admin powered on in a rush. One of them had the same static assignment. 4/n
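(A rough sketch of the first step in that hunt: ask the wire who actually answers ARP for the conflicting IP. This assumes a Linux box on the same L2 segment with scapy installed and root privileges; the IP is a placeholder, and this isn't the tooling that was actually used.)

```python
# Sketch: find which MAC is answering for the conflicting IP.
# Requires root and scapy; the IP is a placeholder.
from scapy.all import ARP, Ether, srp

CONFLICT_IP = "10.0.0.50"   # hypothetical conflicting static IP

# Broadcast a who-has ARP request and collect every reply (multi=True so a
# duplicate-IP situation can show two different MACs claiming the address)
answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=CONFLICT_IP),
    timeout=2,
    retry=2,
    multi=True,
    verbose=False,
)

for _, reply in answered:
    print(f"{reply.psrc} is claimed by {reply.hwsrc}")
# The claiming MAC then gets chased through the switch MAC tables -- or, as in
# this case, straight back to a VM NIC on the ESX host itself.
```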
Obviously we updated documentation on what VMs to power on after an event (ESX crashes thankfully don't happen every day), changed some IPs, and even got rid of two legacy VMs. 5/n
But there's more to this story. We run asset inventory daily, so the legacy VMs would have been found and reported in the next 24 hours. But is that okay? My legacy VMs obviously weren't being patched while they were powered off, so they're vulnerable the moment they come back up. 6/n
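(As a very rough illustration of what a daily inventory sweep buys you: sweep the subnet, diff against the approved asset list, and anything unexpected gets flagged. Placeholder subnet and asset list, nmap assumed on PATH; nothing like a real inventory pipeline, which should pull from a CMDB.)

```python
# Sketch: daily sweep of a subnet, flag anything not in the known-asset list.
# Subnet and inventory are placeholders.
import subprocess

SUBNET = "10.0.0.0/24"                       # hypothetical internal subnet
KNOWN_ASSETS = {"10.0.0.10", "10.0.0.11"}    # hypothetical approved hosts

# Ping sweep with nmap, grepable output ("Host: <ip> ... Status: Up")
out = subprocess.run(
    ["nmap", "-sn", SUBNET, "-oG", "-"],
    capture_output=True, text=True, check=True,
).stdout

live = {
    line.split()[1]
    for line in out.splitlines()
    if line.startswith("Host:") and "Status: Up" in line
}

unknown = live - KNOWN_ASSETS   # e.g. a legacy VM someone powered back on
missing = KNOWN_ASSETS - live   # expected hosts that didn't answer
print("Unknown hosts:", sorted(unknown))
print("Expected but not seen:", sorted(missing))
```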
Given that the SOC saw my nmap scan, I'm confident that they would have seen any exploit attempts as well. Now, guess the number of organizations our size with 24*7 monitoring? Yeah, that number is pretty tiny. This event has me thinking more about security exposures. 7/n
We know that APT actors are trying to compromise security service providers to get to their customers. I got a first-hand chance today to see how easy it is for even a security-minded shop to end up with a pretty massive internal exposure. So what's the takeaway? 8/n
1. Change management. Every legacy VM is a ticking time bomb waiting to be powered on. We know this from both our penetration tests and our IR work.
2. Monitoring saves your butt. I wouldn't have seen this 3 years ago (and I'm horrified by that). I'm happy I see it today. 9/n
Let me close by saying log review is NOT a substitute for monitoring. In this case, the only place you would see logs is the legacy server - the one you aren't using. We got lucky because of an IP conflict. But monitoring is my safety net. Do you have a net? 10/10
Actually one more: I'm sure somewhere, someone will judge my org for sharing this ("OMG, they suck"). But every org has challenges. Most of them have lessons. We learned some today and I want to share them with you. I hope this helps convince someone they need 24*7 monitoring.