"Patch ALL the things," we constantly tell CISOs and CIOs.
Thing is, let's be honest with each other, right? We can't, and this graph is telling.
Patching is a pain, we get it, and we do need to revolutionise the approach. Two years ago, @LargeCardinal wrote a phenomenal paper
where, in essence, the idea was to prioritize patches by expressing the connectivity of various vulnerabilities on a network with a QUBO and then solving this with quantum annealing.
But once he'd put down his markers and explained it to me like the child I am, it made sense.
A QUBO problem involves finding values for binary variables (i.e., variables that can only be 0 or 1).
You can think of this as 1 if a vulnerability is to be patched in the current cycle, or 0 otherwise. We can't patch everything, let's be honest with each other here, but what we can do is apply some logic to it, like:
Impact of vulnerabilities: Some vulnerabilities have a more significant impact if exploited. Take an SSL VPN, the border gateway device: that's a 'patch y0 shit now, done it yet??' situation.
Dependency factors: Some systems or applications might depend on others, so patching one might reduce the risk or necessity of patching another immediately.
Patching costs and risks: Including the cost or potential disruption caused by patching (e.g., system downtime).
The paper is deep, as you'd expect from Dr Carney, but the logic is there: use vulnerability graphs to work out what to patch, based on the factors above. By framing vulnerability patching as a QUBO, an organization can systematically analyze and prioritize patches in a way that accounts for the complex interdependencies between system components, leading to a more secure and efficient patch management process.
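To make the shape of it a bit more concrete, here's a toy sketch of a patch-selection QUBO, brute-forced in Python because it's three variables rather than a real network. The impacts, costs, dependency weight and asset names are all invented for illustration; this is not the formulation from the paper.

```python
# Toy QUBO for patch prioritisation -- NOT the paper's formulation, just an
# illustration. Impact, cost and coupling weights below are invented.
from itertools import product

vulns = ["ssl_vpn", "app_server", "db_server"]                    # hypothetical assets
impact = {"ssl_vpn": 9.0, "app_server": 5.0, "db_server": 7.0}    # reward for patching
cost   = {"ssl_vpn": 2.0, "app_server": 4.0, "db_server": 6.0}    # downtime/effort penalty
# Dependency: patching the VPN partially mitigates the db exposure, so patching
# both in the same cycle is worth slightly less than the sum of the parts.
coupling = {("ssl_vpn", "db_server"): 3.0}

def energy(x):
    """QUBO energy: lower is better. x maps vuln name -> 0/1 (patch this cycle?)."""
    e = 0.0
    for v in vulns:
        e += x[v] * (cost[v] - impact[v])        # linear terms
    for (a, b), w in coupling.items():
        e += w * x[a] * x[b]                     # quadratic (dependency) terms
    return e

# Brute force is fine for a toy example; the whole point of the paper is that an
# annealer handles the combinatorial blow-up on a realistically sized graph.
best = min((dict(zip(vulns, bits)) for bits in product([0, 1], repeat=len(vulns))),
           key=energy)
print(best, energy(best))
```

On a real estate you'd have thousands of variables and couplings lifted from the vulnerability graph, which is exactly where an annealer (quantum or otherwise) earns its keep.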
I didn't say it was easy, but the graph showing how badly we're doing it now kinda tells me we need to rethink our approach to patching.
Still, 60% for 47% of vulnerabilities not patched? This isn't ideal at all.
Strap in, we's going on a ride, a static analysis ride. I recently came across this paper, which looked at a wide variety of SAST tools against a number of Java apps.
Java being the language of choice in the enterprise, and often not showcasing the best approaches out there, so it's a good target.
First up, what did they use and what did they benchmark it against?
They looked at free tools that specifically support Java and, most importantly, are actively maintained.
The target was the @owasp project, a good choice imho. They also looked at Java apps with bugs that had disclosed CVEs, which came to around 680 programs.
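The scoring side of a benchmark like that is simple enough to sketch. This isn't the paper's code, just the gist: compare what each tool reports against the ground-truth bug locations and work out precision and recall. The tool names, findings and file/line pairs below are made up.

```python
# Hedged sketch of SAST benchmark scoring -- not the paper's code. Findings and
# ground truth are invented (file, line) pairs standing in for known CVE locations.
ground_truth = {("Login.java", 42), ("Upload.java", 88), ("Search.java", 17)}

tool_findings = {
    "tool_a": {("Login.java", 42), ("Search.java", 17), ("Search.java", 99)},
    "tool_b": {("Upload.java", 88)},
}

for tool, found in tool_findings.items():
    tp = len(found & ground_truth)                      # true positives
    precision = tp / len(found) if found else 0.0       # how much of the noise is real
    recall = tp / len(ground_truth)                      # how many real bugs it caught
    print(f"{tool}: precision={precision:.2f} recall={recall:.2f}")
```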
Bugs happen, but it's rare you see a bug that grabs you so hard it makes you nod like a little dog.
CVE-2023-44487 did that for me
Good god, what a bug, and here's why.
First up is understanding the key differences between HTTP/1.1 and HTTP/2, especially how requests work.
HTTP/1.1 is a text-based protocol where each connection handles one request/response at a time. Every time you request / from NSA.gov, there will be a different request for each element of that page (CSS, images, etc.).
HTTP/2 is a binary protocol that utilises multiplexing, which allows multiple requests and responses to be sent simultaneously over a single connection.
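To see the difference in practice, here's a rough sketch using httpx (needs the http2 extra installed) that fires a batch of requests all sharing one connection as separate streams; example.com is just a placeholder. The Rapid Reset trick behind CVE-2023-44487 abuses exactly this machinery by opening streams and immediately cancelling them, which this benign snippet does not do.

```python
# Rough sketch of HTTP/2 multiplexing with httpx (pip install "httpx[http2]").
# Many requests share one TCP+TLS connection as separate streams, unlike
# HTTP/1.1 where each connection handles one request/response at a time.
import asyncio
import httpx

async def main():
    async with httpx.AsyncClient(http2=True) as client:
        urls = ["https://example.com/"] * 10           # placeholder URL
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.http_version, r.status_code)       # expect "HTTP/2" throughout

asyncio.run(main())
```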
An interesting new feature found in @Apple’s latest privacy and security report is Link Tracking Protection, and I’ve not stopped thinking about it.
First up, it’s pretty cool. My views on the pervasive nature of the tracking industry are not something I’ve hidden away: it’s an ugly industry with no real oversight, so any effort to put a finger in their eye is one to applaud.
The approach by Apple is interesting
First up is the deeper inspection (I’m assuming client-side only) that intercepts any URL and runs a regex on it to strip out utm and other crap added to it.
If it works like that, I’m impressed. However, how much stuff will it break in the process? I guess time will tell.
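For what it’s worth, the mechanics of stripping tracking parameters are easy enough to sketch. This is my guess at the general shape, not Apple’s implementation, and the list of parameters is mine, not theirs.

```python
# Hedged sketch of tracking-parameter stripping -- not Apple's implementation.
# The list of "tracking" keys is my own guess at the kind of thing stripped.
import re
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_KEY = re.compile(r"^(utm_\w+|fbclid|gclid|mc_eid)$", re.IGNORECASE)

def strip_tracking(url: str) -> str:
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if not TRACKING_KEY.match(k)]                 # drop anything that looks like tracking
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_tracking("https://example.com/article?id=7&utm_source=tw&fbclid=abc123"))
# -> https://example.com/article?id=7
```

You can also see from that tiny example where the breakage risk comes from: any site that legitimately relies on one of those parameters gets a mangled link.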
Here’s the thing, right: if you are building any application/binary, or indeed anything that takes input and uses it to form the basis of further functions/actions, you kinda need to think about robustness.
Imagine an HTTP POST request to /remote/portal/bookmarks
What’s needed is a Content-Length header, which indicates the size of the corresponding body. This is how the web works, so to send, and indeed accept, a zero-byte body is odd, and you’d check for that, right?
Bueller? Right??
Well, it seems not, and there’s a brilliant write-up by Aliz Hammond over at @watchtowrcyber of why this caused a segfault in an SSL VPN appliance.
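The write-up is worth your time. For the builders in the audience, the missing guard is boringly simple to sketch; the path comes from the thread above, everything else here is invented and obviously not the appliance’s code.

```python
# Hedged sketch of the kind of guard that was missing: reject a POST whose body
# is absent or zero bytes before any parsing code can touch it.
# The /remote/portal/bookmarks path is from the thread; the handler is invented.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0) or 0)
        if self.path == "/remote/portal/bookmarks" and length <= 0:
            self.send_error(400, "Empty body not accepted")   # fail closed, no parsing
            return
        body = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok: %d bytes\n" % len(body))

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```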
The daily routine used to be monitoring Checkpoint FWs and adding new rules to stop silly attempts at scanning Solaris, adding rules to allow Apache to talk to Oracle, and so on. Then Cisco came out with this box that meant we could use a handful of IPv4 addresses and then RFC 1918 in our DC.
Holy shit, this means they couldn’t see our database servers anymore! Pete, this changes everything
All was going so damn well until that bloody Rain Forest Puppy released this paper talking about hurting SQL servers. Wtf is xp_cmdshell, and why can you see internal servers??
When the Twitter dump came out, I enjoyed having a “theoretical” chat with John about how you would “theoretically” weaponise it. It’s not a new topic per se, we did abuse this in yesteryear, but that doesn’t make it any softer a threat.
Because we’ve tied our digital existence to emails and domain names, they become the Crown Jewels. Compromise those and the Tower of London is no longer yours. This is made harder with custom domains and mail servers, as if you give up that domain (I mean it’s not like we collect)