For decades, we've been preaching "cybersecurity is not just about the perimeter", yet every time our community is tested, we fall back to "it's just the perimeter". We've been lying this entire time.
The #1 reason ransomware has such a devastating impact is that we put all our security eggs in the Active Directory basket: the hacker gets Domain Admin, and the game is over.
We could do more to segment networks and Windows domains so that a compromise of one can't spread to the rest. But organizations don't do this. Once inside, hackers have free rein over everything.
With Colonial Pipeline, the important systems affected were the billing systems. But how did hackers reach those? Did the attack start in the billing systems? Or did it start somewhere else and spread to billing?
The easiest way to address ransomware is with a pentest that starts from an average employee's desktop computer. Don't make them penetrate the perimeter, let them start inside and watch how they get Domain Admin privileges.
Ransomware is just a pentest where you negotiate scope and payment afterwards instead of before.
No it wouldn't, because there aren't any good cybersecurity metrics.
"Standardization" implies people are doing it, but in different ways. With cybersecurity metrics, people effectively aren't doing it. What metrics people do track are largely hocus-pocus handwaving.
By the way, my tweet above is easily falsifiable: all you have to do is tell me one statistic that would be meaningful for a government body to track, that would give policymakers or business leaders meaningful insight into the state of cybersecurity.
Or, think of my tweet as a challenge: if you can show me useful things that a Bureau of Cyberstatistics might track, then I'd be an ally strongly promoting the idea.
Ah, memories! I was giving a talk at PasswordCon on "Password Misconceptions" or something similar. A previous speaker was "caught" unlocking their screen before their presentation with a short password. Everyone knows short passwords are weak.
So when it was my turn, I did the same, because I'm a jerk (I quickly edited my talk to add a slide).
The audience saw me connect my laptop to the projector, saw the lock screen appear, and saw me type a short password [******] to unlock my computer to start the presentation.
They laughed at me for my weak, insecure password. How could somebody be talking about password security and have such a weak password on their laptop??
This is normal NYTimes fare: "My provider of the anti-science medical quackery called chiropractic holds anti-science medical quackery opinions about vaccines. Is this unethical?"
So that "9-0 pcap" conspiracy-theory video: I grabbed a screenshot of what they claim to be "pcap of encrypted data", OCRed it, and converted the hex back to ASCII. My guess is that it's a hexdump of an SQL dump. It's certainly neither "encrypted" nor a "pcap".
Fields separated by commas imply CSV format, but when those fields are surrounded by quotes, and groups of fields are wrapped in parentheses, it starts to look a lot like an SQL dump instead.
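The conversion itself is mechanical. Here's a minimal sketch of the idea: strip the offsets from a hexdump, decode the hex pairs back to ASCII, then apply a crude heuristic for SQL-dump-style rows. The hexdump layout, sample data, and function names here are all my own assumptions for illustration, not the actual data from the video:

```python
import re

def hexdump_to_ascii(dump: str) -> str:
    """Convert lines of a classic hexdump (offset, then hex byte pairs)
    back into the raw text they encode. Assumes an "offset: xx xx ..."
    layout with no trailing ASCII gutter; real OCR output is messier."""
    out = []
    for line in dump.splitlines():
        # Drop the leading offset, keep only two-character hex pairs.
        body = line.split(":", 1)[1] if ":" in line else line
        pairs = re.findall(r"\b[0-9a-fA-F]{2}\b", body)
        out.append(bytes(int(p, 16) for p in pairs).decode("ascii", errors="replace"))
    return "".join(out)

def looks_like_sql_dump(text: str) -> bool:
    """Heuristic: quoted, comma-separated fields grouped inside parentheses
    suggest SQL "INSERT ... VALUES ('a','b',...)" rows rather than plain CSV."""
    return bool(re.search(r"\(\s*'[^']*'(\s*,\s*'[^']*')+\s*\)", text))

# Hypothetical two-line hexdump for demonstration.
sample = ("00000000: 49 4e 53 45 52 54 20 49 4e 54 4f\n"
          "00000010: 20 74 20 56 41 4c 55 45 53 20 28 27 61 27 2c 27 62 27 29")
decoded = hexdump_to_ascii(sample)
# decoded == "INSERT INTO t VALUES ('a','b')", which the heuristic flags as SQL
```

The point isn't the exact regex; it's that decoded output with quoted fields inside parentheses reads as INSERT statements, not as "encrypted" data.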
Bah, immediately after posting this, I see others have already gone down this route:
So I'm reading the CFAA decision. I want to point out yet again that the "mens rea" requirement in the CFAA is bullshit. It doesn't mean the perp knew they were unauthorized, it means a reasonable person in the perp's place would've known they were unauthorized.
I can appreciate that in most crimes, this is the reasonable approach.
It's just that in computer crimes, it's not. People have wildly different understandings of how computers work, and thus different understandings of what's authorized.
Most people are intimidated by the URL bar in the browser and have never edited a URL in their lives. Thus, reasonable people assume that if you couldn't have accessed a resource without editing the URL, then it was unauthorized.