Regarding the Hunter Biden forensic analysis: 1. I personally am not a fan of the fact that it took so long to get an independent analysis of the data. 2. I wish the evidence had been made available, without strings, to reputable media organizations in 2020. 3. It wasn't. 1/n
When political operatives shop evidence of a "bombshell story" weeks before an election, but dictate publication timelines as a condition of providing the evidence, skepticism is fully warranted.
Publishing without validation in that case is journalistic misconduct IMO. 2/
Whether you agree with what Twitter did in suppressing the story is a separate issue from the integrity of the evidence.
Anyone discussing the @nypost stories without acknowledging evidence was manipulated BEFORE being passed to the NY Post is being disingenuous. 3/
As reported, the evidence I analyzed for @washingtonpost was *definitely manipulated* before it got to me. I have no knowledge of what the @FBI has or how it might differ from what I analyzed. 4/ washingtonpost.com/technology/202…
The line between "manipulated" and "fabricated" is razor thin in so many cases, especially cases like these where the stakes are so high.
I legit do not understand why people want to claim other news outlets should have run this story without validation. 5/
As I see it, there are many issues being discussed today: 1. Was Twitter's censorship consistent with its own policies? 2. Are Twitter's policies on censoring hacked material stories appropriate? 3. How much validation should be performed by media outlets before publication? 6/
4. Does the magnitude of the story change the validation standard? 5. Does the source? 6. What if you discover evidence has been modified prior to receiving it? 7. Do you have a duty to report that modification? 8. Can others independently validate the evidence? 7/
I'm sure there's nuance I didn't capture here. My opinions about Twitter's policies don't really matter, so I'll simply focus on the evidentiary questions.
We know for a fact the evidence was modified. That's not up for debate. It also wasn't broadly available for validation. 8/
I wouldn't take seriously any media outlet that chose to publish a story of this magnitude without validation. After you discover modification, then what you choose to publish (and how that manipulation is described) is a question of editorial discretion. 9/
Unless you're an editor of a major publication (which I am not), you're unqualified to dictate how publications should report a story like this.
Getting it wrong will cost advertising dollars and possibly jobs (most probably yours), not to mention your duty to the public. /FIN
Your airline pilot started in a single engine Cessna. Nobody called it gatekeeping. And before that, they learned lots of "mostly irrelevant" facts in ground training.
Cyber is one of the only fields where we pretend that skipping the basics is okay, just to put butts in seats. 1/4
Do you really want an incident responder who doesn't understand the implications of a "non-standard" subnet mask (whatever that actually means, don't get me started)? Sure, it's only like 0.1% of IR where that's relevant, but I'm just highlighting an example. 2/4
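As an aside, the "non-standard subnet mask" point is easy to make concrete. Here's a minimal sketch (function names are mine, not from the thread) that distinguishes a contiguous mask like 255.255.254.0 from a non-contiguous one like 255.0.255.0:

```python
def mask_to_int(dotted: str) -> int:
    """Convert a dotted-quad mask like '255.255.255.0' to a 32-bit int."""
    value = 0
    for octet in dotted.split("."):
        value = (value << 8) | int(octet)
    return value

def is_contiguous_mask(mask: int) -> bool:
    """Return True if a 32-bit mask is contiguous ones followed by zeros."""
    inverted = (~mask) & 0xFFFFFFFF
    # A contiguous mask inverts to 0b000...0111...1, i.e. one less than a
    # power of two, so ANDing with (inverted + 1) yields zero.
    return (inverted & (inverted + 1)) == 0

print(is_contiguous_mask(mask_to_int("255.255.254.0")))  # standard /23 -> True
print(is_contiguous_mask(mask_to_int("255.0.255.0")))    # non-contiguous -> False
```

A mask that fails this check still "works" on some stacks, which is exactly the kind of implication a responder should be able to reason about.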
I don't have the answers for "how much knowledge is enough" for a given task. But given the number and cost of ongoing security incidents, I suspect that if we don't answer the question, regulatory (read licensing) boards will answer for us. 3/4
The new #msdt 0-day can be mitigated by removing the protocol handler for ms-msdt (reg delete hkcr\ms-msdt /f).
Disclaimer: I haven't checked for impacts in a large production environment, but it seems better than being exploited. MSDT is just a diagnostic tool, so removing the handler is likely safe.
When I say "haven't tested" I mean for second order impacts. I've tested that this is 100% effective as a mitigation.
FYSA: I haven't seen this on my test system yet, but in any case I'm still okay recommending removing the handler until I have another mitigation. A patch will likely be released long before the handler needs to be re-enabled.
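Since the mitigation deletes the handler key outright, a cautious sketch (my addition, not from the thread) is to export the key first so it can be restored once a patch lands. These are standard `reg.exe` commands, run from an elevated prompt:

```shell
:: Back up the ms-msdt protocol handler key before removing it
reg export HKCR\ms-msdt ms-msdt-backup.reg /y

:: The mitigation from the thread: remove the handler
reg delete HKCR\ms-msdt /f

:: After patching, restore the handler from the backup if desired
reg import ms-msdt-backup.reg
```

`reg export` writes a .reg file that `reg import` can replay, so the mitigation stays reversible.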
Okay, so playing the #msdt 0-day a bit and here's what's happening: 1. The maldoc contains a linked HTML document 2. Word automatically retrieves the linked HTML document, which contains JS to reset the location to an ms-msdt protocol handler, which is present by default 1/
3. The protocol handler launches msdt, which launches a command using the IT_BrowseForFile parameter. The maldoc that triggered this whole event invokes this code (newlines and comments added). The doc was likely distributed with a .rar file. 2/
4. I don't have the ".rar" file, but we can still tell what it's doing. The findstr command is looking for "TVNDRgAAAA" which means it's looking for a base64 encoded string beginning with "MSCF" which is the file header for a .cab file. 5. The expand command unpacks the .cab 3/
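The "TVNDRgAAAA" detail is easy to verify yourself: it's the base64 encoding of a .cab file's magic bytes, the ASCII string "MSCF" followed by a zeroed 4-byte reserved field at the start of the CFHEADER structure. A quick sketch:

```python
import base64

# Every .cab file begins with the signature "MSCF" followed by a 4-byte
# reserved field of zeros (the start of the CFHEADER structure).
cab_header = b"MSCF" + b"\x00" * 4

encoded = base64.b64encode(cab_header).decode()
print(encoded)  # TVNDRgAAAAA=

# This is why a findstr hunt for "TVNDRgAAAA" flags base64-encoded .cab data.
assert encoded.startswith("TVNDRgAAAA")
```

The same trick works for spotting any base64-encoded file type: encode the known magic bytes and search for the resulting prefix.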
I've had a chance, or let's say many catalysts, to think about friendship and what it really means.
True friends:
* Take that call even when it's not convenient
* Don't judge, because let's be real - we've all been there
* Try to make you laugh even when you'd rather not 1/
* Don't view things as transactional (e.g., what am I getting out of this?)
* Just listen if you need to vent
* Tell you the hard truth you honestly don't want to hear, even when they know speaking truth can harm the relationship
* Exemplify empathy
I could go on, but... 2/
I've certainly failed at many of these points over the years.
Going forward, I'll be trying to do all these things in my friendships. If I've failed you specifically, I'm deeply sorry. If you've been there for me, I'm eternally grateful. 3/
In security, we talk a lot about CIA (confidentiality, integrity, and availability). Most of us also recognize the vast majority of the industry only cares about availability. When I call people on this, they always protest. This morning, a great retort hit me. 1/
How often does IT refuse to update a security control (e.g. EDR agent) without testing because it might cause a compatibility issue (availability) and break something? Happens ALL THE TIME. "Can't upgrade until we test in every business unit for issues." 2/
But once the software is upgraded, how often do you see teams say "okay, now that we've updated, let's validate it's still catching everything we expect it to?" Almost never. We delay upgrades/updates, often increasing risk of a compromise to maximize availability. 3/
PSA 🧵 A threat actor "preparing for destructive cyberattacks" looks identical to "gaining access for intelligence operations." Like 100% identical. So much so that you *cannot* tell the difference. Be wary of anyone claiming they "know" a destructive attack is being prepared. 1/
Be similarly skeptical of anyone who claims they're sure a destructive attack *isn't* coming in a given situation.
I heard one such argument: "we saw them exfiltrating data, so it's intelligence collection." If they're burning the network down, of course they'd exfil. 2/
In fact, it's intuitive to think that bulk intelligence collection could even accelerate before a destructive attack. They likely won't regain access to these networks in the immediate future, and in most cases there's no incentive to be low and slow at this point. 3/