I realized today that I had never talked publicly about something really important about access control systems: design their semantics to be reverse-indexable.
This is a much spicier take than it sounds like, but there's a good reason. 🧵 [1/]
Right now, access control systems are built so you can show up and say "I want access to object X", the system looks up the access control rules for object X, and then figures out whether you should have access. [2/]
With the exception of a few corner cases, the semantics of the access-control system you build should be able to be turned upside down. For this you want a reverse index (which Wikipedia calls an "inverted index"). [3/]
With a reverse/inverted index, you aren't limited to looking up the access control rules by the name of the object; you can also look up what someone has access to. This is freaking magic because you can answer "what does Lea have access to?" [4/]
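To make that concrete, here's a minimal sketch of an ACL store indexed in both directions. It's purely illustrative (a toy, not any real system); the class and method names are made up for the example:

```python
from collections import defaultdict

# Illustrative sketch: grants are stored as (principal, object) pairs,
# indexed in BOTH directions. Not any real system's API.
class Acl:
    def __init__(self):
        self.by_object = defaultdict(set)     # forward index: object -> principals
        self.by_principal = defaultdict(set)  # reverse index: principal -> objects

    def grant(self, principal, obj):
        self.by_object[obj].add(principal)
        self.by_principal[principal].add(obj)

    def check(self, principal, obj):
        # The classic question: "should this person have access to object X?"
        return principal in self.by_object[obj]

    def accessible_to(self, principal):
        # The upside-down question: "what does Lea have access to?"
        return self.by_principal[principal]

acl = Acl()
acl.grant("lea", "doc:roadmap")
acl.grant("lea", "doc:budget")
print(acl.accessible_to("lea"))  # {'doc:roadmap', 'doc:budget'} -- no brute force needed
```

That `accessible_to` lookup is also what makes the dashboard problem below cheap: counting access becomes a set-size read instead of a full scan.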
If you don't have reverse-indexability, then it's very hard to tell someone who's trying to, say, add someone to a group what's going to actually happen when they do. (They should know!) It may have unexpected results, adding or removing people's access to various data. [5/]
It's also tricky to build a dashboard that tracks things like how many people have access to certain data. You may need to get the numbers by brute force, asking "does X have access to Y?" for every person X and every relevant piece of data Y.
There are more reasons, but... well, tweet thread. [6/]
Now for why this is a spicy take: many of the grammars that people use to do access control do not have this property. In particular, if you're using a policy language, it's *very* unlikely to be reverse-indexable. If you're using ALLOW X/DENY Y semantics (like firewall rules), you don't have it. [7/]
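Here's a tiny sketch of why first-match ALLOW/DENY rules fight reverse-indexing; the rules and names are invented for illustration:

```python
# First-match ALLOW/DENY evaluation, firewall-style. The answer for any
# (user, object) pair depends on rule ORDER and wildcards, so there's no
# per-user index to consult: answering "what can lea access?" means
# re-evaluating the whole ruleset against every object in the system.
RULES = [
    ("DENY",  "lea", "doc:budget"),
    ("ALLOW", "*",   "doc:budget"),
    ("ALLOW", "lea", "*"),
]

def allowed(user, obj):
    for effect, rule_user, rule_obj in RULES:
        if rule_user in (user, "*") and rule_obj in (obj, "*"):
            return effect == "ALLOW"  # first matching rule wins
    return False  # default deny

def accessible_to(user, all_objects):
    # Brute force is the only general option.
    return {obj for obj in all_objects if allowed(user, obj)}
```

Order matters and a DENY can negate anything below it, so the only general way to enumerate someone's access is to evaluate every rule against every object.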
You might note that Zanzibar, Google's big authorization system, has a grammar which is reverse-indexable. This isn't an accident. (I designed the grammar around several different goals, including reverse-indexing.) [8/]
A rant on tokenization:
Tokenization is replacing particular data with an opaque set of bits, called a “token”.
The token is either an encryption of the data or an opaque value whose mapping to the data is stored in a table (sketch below). Tokens are typically a fixed number of bits (often 64) for simplicity.
They are also surprisingly dangerous...
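To ground the rant before it continues: a minimal sketch of the table-mapping flavor, assuming a trusted in-memory vault. `TokenVault` and friends are made-up names; a real vault is its own hardened, durable, access-controlled service:

```python
import secrets

# Illustrative only. In production the vault is a separate hardened service;
# here it's just a dict so the shape of the scheme is visible.
class TokenVault:
    def __init__(self):
        self.token_to_data = {}

    def tokenize(self, sensitive_value):
        token = secrets.token_hex(8)  # 64 random bits, opaque, unrelated to the input
        self.token_to_data[token] = sensitive_value
        return token

    def detokenize(self, token):
        # Everything that can call this is inside your security boundary.
        return self.token_to_data[token]

vault = TokenVault()
t = vault.tokenize("4111 1111 1111 1111")  # a standard test card number
print(t)                    # something like '9f3ab62c01d47e88' -- safe to pass around
print(vault.detokenize(t))  # the sensitive value comes back out
```

All of the security collapses into that one table plus whoever is allowed to call `detokenize`, which is both the appeal and the danger.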
I love tokenization for cases like credit card numbers, where a small opaque piece of data is quite sensitive and generally has reasonable usage patterns. But people try to use tokenization in ways which are neither secure nor scalable.
Don’t do this. Let me explain...
Why use tokens? (If you can make them secure and scalable enough)
* Because you don’t need to worry so much about every single system which touches tokens instead of sensitive data. That's great, because I've already got enough things to worry about.
What's a good way to set the edge between a security (or privacy) engineering team and the rest of engineering?
(Was asked this question this morning and thought the way I think about the answer might be helpful to other folks.)
One simple trick: look at your on-call rotations
There are a lot of places where systems are security/privacy-critical. A *lot*. Not all of those should go in the security/privacy team.
I'm Captain Pragmatic: teams should sit where they're productive and happy. But this is where I'd tend to put those teams.
1. The systems you can't build or run without doing security/privacy deeply on an ongoing basis. Think authentication, authorization, insider threat detection systems for security. Think central data deletion infra for privacy.
A buddy who's interested in end-to-end encryption (E2EE) but hasn't done one of these projects in the very messy place which is the real world happened to ask me this morning about pitfalls which might not be obvious. So here's a partial list in the hopes that it's helpful. 🧵
For context: I have a PhD in cryptography, my thesis is on privacy-preserving cryptographic protocols, and I'm publicly known to have worked on several novel E2EE systems (from Zoom and Google).
So: 1) YMMV because every system is a bit different, and 2) this is not my first rodeo.
1. People lose their keys. Most obvious, always important. Phones break, get lost, etc., and all the keys which were on them go away. Also, people forget passwords.
People get grumpy when they lose their data. If you can design your product so they're not, it's easier.
I want to be right, so I keep looking for how I could be wrong.
I ask my coworkers what worries them, how I'm wrong, what I'm missing. I repeat and repeat that I want the bad news, because I can't help fix problems I don't know exist.
Everyone has their own style, but this really helps me solve problems, fix things, and keep them fixed.
Plus I get fewer surprises. Security and privacy people hate surprises.
I've been getting questions in here so I'm going to start answering 'em in this thread in the hope that the answers are helpful to other people, too. And if you have a different answer, go ahead and post it! Different things work in different situations.
Hey folks! If you don’t know me, I’m the CISO of @Twitter – I run the information security, privacy engineering, and IT teams.
We’ve got a bunch of roles open across infosec, privacy eng + legal, and IT. Come help Twitter build great things which respect our users! 🧵
I’d love to have the chance to work with you. We have roles from relatively junior up to Director. Links in this thread; there are likely some more coming.
Managers are tagged in this thread, so you can ask any of us questions or say hi. They're good folks.
All of these jobs are remote-friendly, with a few caveats: (1) your working day needs to overlap heavily with the folks you’re working with (for most roles Americas time zones) (2) we need to be able to legally hire you where you want to work.
I mentioned the Bad News Hat at #enigma2022 and promised to tell the story when I had a few minutes.
This is the hat I pull out when I have to tell people something they won't like. I do it because earlier in my career a group of people literally cringed when they saw me. 🧵
Back in the day, I worked with a particular team who had what I called "incident season", which came right after... well, as far as I could tell, "bad decision season". They weren't all bad decisions, but under pressure to launch, this team shipped some things which weren't solid. /2
I had to walk over and tell that team they had an incident they needed to drop everything and fix so many times that they started literally flinching when they saw me, even when I wasn't coming to tell them anything bad!