O.k., this is happening ... now. I'm going to explain what's actually going on when data is encrypted, hopefully without mystifying it, and cover some of the weird and inconsistent stuff cryptographers come up with.

O.k. so symmetric encryption - that's what we most commonly use when we want to encrypt gobs of data, large and small. Your browser sends and receives your data with symmetric encryption. If your files or disk are encrypted ... that's symmetric encryption.
iMessage, Signal, WhatsApp - they all use symmetric encryption to actually encrypt your messages. So at its most basic, when you think of encryption as taking chunks of data and "scrambling" them so that nobody who doesn't have a key can understand them, that's this.
Simple example, let's say I have the string "Ovaltine" and I want to encrypt it. I could use rot13, a very simple old-school Caesar cipher which just basically makes a circle out of the alphabet (so a meets z) and replaces each letter with the letter that is 13 letters away ...
So "O" becomes "B", and "v" becomes "i" and "Ovaltine" becomes "Binygvar". Of course this isn't very secure, it's a silly example, it's very easy to break this kind of encryption because an attacker can check which letter is most common (which is usually e) and work it all out.
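If you want to play with this yourself, rot13 fits in a few lines of Python. This is a toy, obviously, not real crypto:

```python
def rot13(text):
    # Shift each letter 13 places around the alphabet circle;
    # anything that isn't a letter is left alone.
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + 13) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

print(rot13("Ovaltine"))         # Binygvar
print(rot13(rot13("Ovaltine")))  # Ovaltine -- applying it twice undoes it
```

Notice applying it twice gets you back where you started, which is a point I'll come back to.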
Now, you might imagine that there must be more sophisticated ways to "scramble" the letters. Like have some kind of complicated scheme where 'a' maps to 'p', but then the next time it maps to 'f', or maybe 'a' even maps to two letters sometimes, like 'a' -> 'jd' or something.
So like with a complicated scheme like that 'Ovaltine' could become say "FGyswDmweeRq" (notice it's longer). Well, there are encryption algorithms throughout history that work like this, but it's not how modern encryption works at all.
Instead of "scrambling" letters, modern encryption works by taking data and combining it with seemingly-random looking data in a clever way. It's similar to rot13 in two important ways: encryption and decryption are actually the same operation, and everything is in-place.
Actually did you notice that rot13 is both an encryption and decryption algorithm? rot13(Ovaltine) -> Binygvar , rot13(Binygvar) -> Ovaltine. I like to think that's one of the nice symmetries of symmetric encryption. Anyway, back to that clever way I was talking about ...
The cleverness we use is a bitwise XOR operation. There are inconsistent notations for XOR in cryptography and formal logic and code, but I'll use the one you're probably familiar with. It looks like a hat: ^
XOR is short for "exclusive OR" and it's an operator (or function if you want to think of it like that) that takes two arguments and returns one result. a ^ b = c. It's bitwise, so it operates on each corresponding bit.
If a and b are bytes, you can think of a ^ b = c as really 8 different operations all happening at once. ^ compares the first bit of a, and the first bit of b, and puts the result in the first bit of c. And it does the same 7 more times, for the other bits.
The rules are simple: if a bit from a is "1" OR if a bit from b is "1", then we set the same bit of c to 1 ... but only if they *both* weren't 1. That's the exclusive part. Here's an old school truth table:
A | B | C
0 | 0 | 0
1 | 0 | 1
0 | 1 | 1
1 | 1 | 0
Now the cool thing about XOR is that it's like rot13: we can use it to encrypt and decrypt things. Going to use some really simple examples here. Let's say that we want to encrypt just the number "3" and that our key is another number, "7".
So 3 ^ 7 = 4. So the encrypted version is "4". Now to decrypt, I just do the same again: 4 ^ 7 = 3. Try any numbers you like, or any data, this will always work: XOR always reverses itself.
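Here's that exact example in Python, plus the same trick applied byte-by-byte to a string (a toy sketch to show the mechanics, not a real cipher):

```python
# XOR is its own inverse: (x ^ k) ^ k == x, bit by bit.
plain = 3
key = 7
cipher = plain ^ key
print(cipher)        # 4
print(cipher ^ key)  # 3 -- XOR-ing with the same key again decrypts

# The same trick works on whole byte strings, one byte at a time:
def xor_bytes(data, keystream):
    return bytes(d ^ k for d, k in zip(data, keystream))

msg = b"Ovaltine"
pad = bytes([42] * len(msg))  # a (terrible) constant keystream, just for illustration
ct = xor_bytes(msg, pad)
print(xor_bytes(ct, pad))     # b'Ovaltine' -- back to the plaintext
```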
Byte by byte, that's how we're actually encrypting and decrypting things. There's no scrambling or moving going on ... just XOR-ing. The hard part is coming up with the data to XOR with.
One approach is to have a big chunk of secret data lying around and use that to XOR with. As long as everyone who needs to send or read the original plaintext has the same secret data ... this should work. Couple of problems with this:
Problem 1: The secret data needs to be seemingly-random. You can't use the text from a book or something. Any patterns in the secret data will show up in the encrypted version, and that's literally part of how the allies beat the axis powers in WWII.
Problem 2: You can't ever re-use the secret data. Again, patterns will show up. So you have to somehow securely get big wedges of secret random data (One time pads) around to everyone who needs them. Too hard.
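A one-time pad really is this simple, which is why it's worth seeing once. A minimal sketch, using the OS's random generator for the pad:

```python
import os

def otp_encrypt(plaintext, pad):
    # The pad must be truly random, at least as long as the message,
    # and used exactly once -- that's what makes it a *one-time* pad.
    assert len(pad) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, pad))

pad = os.urandom(8)                # both sides need this same secret pad
ct = otp_encrypt(b"Ovaltine", pad)
pt = otp_encrypt(ct, pad)          # the same operation decrypts
print(pt)                          # b'Ovaltine'
```

The crypto here is perfect; the logistics of shipping pads around are the killer.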
So in modern encryption, we *generate* the secret data we need from a small key, and those keys are much easier to get around and protect. This is what symmetric encryption algorithms really are: schemes for deterministically generating random data from a key.
That "deterministic" part really matters: two people with the same key have to generate the *exact* same data, or else they won't be able to understand one another.
You've probably heard of lots of these algorithms: AES, 3DES, DES, RC4, ChaCha20. All of these algorithms do this. It turns out that the math problem of taking a key and generating a random stream of data, one that has no patterns and is not predictable in any way, is hard.
From that list, only AES and ChaCha20 are considered secure today; the others have been broken in interesting ways ... people figured out how to predict them. AES itself has a bit of a patchy reputation, because ...
Cryptographers: AES is the premier and most-analyzed encryption algorithm. Absolute gold standard! 🕶️

Also Cryptographers: AES implementations in software (not hardware) are either insecure, or slow, or both. It wasn't designed with cache timing side-channels in mind. 🤦‍♂️
Don't worry too much if you didn't follow that. Just take this away: AES is mathematically awesome but hard to code, it's a good thing we almost always have hardware support to do it instead.
... anyway ... moving on ... how do these encryption algorithms actually work? How do we take a key and securely generate a random stream of data? I'm going to simplify things a little here and talk about blocks.
These algorithms take three inputs, and produce encrypted text. The inputs are the key, the plaintext ... and - surprise - something strange called an initialization vector. So for example: AES(key, IV, plaintext) -> encrypted_data
The key and IV are combined to create a set of "starting conditions" for the algorithm; it's like an initial permutation, or scrambling, of some scrabble tiles. The same key and IV will always produce the same starting conditions. So why do we have an IV, you ask? ...
We have an IV so that we can encrypt multiple messages using the same key. Without an IV, each generated stream would always be the same, and that's no good. One of our rules is that we can't ever re-use the stream. So we need the IV to mix things up. But unlike the key, the IV can be public.
So when you encrypt a message and send it to someone, you can add "hey, here's the IV I used". Now it's still critical that we don't re-use the combination of key and IV, because that would generate the same random data.
Two common ways to do this: 1) the IV is some kind of counter: we just increment it for every message. 2) the IV is generated randomly, and it's a big enough value that we don't worry too much about collisions. Anyway, I said I'd talk about blocks.
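Both IV schemes are easy to sketch. I'm assuming a 12-byte (96-bit) IV here, which is a common convention, but the exact size depends on the algorithm and mode:

```python
import itertools
import os

# Scheme 1: counter IVs -- guaranteed never to repeat, as long as
# we keep counting up and never reset the counter under the same key.
counter_ivs = itertools.count(0)
iv1 = next(counter_ivs).to_bytes(12, "big")
iv2 = next(counter_ivs).to_bytes(12, "big")
print(iv1 != iv2)  # True -- every message gets a fresh IV

# Scheme 2: random IVs -- 96 bits is big enough that a collision is
# astronomically unlikely for any realistic number of messages.
iv3 = os.urandom(12)
```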
So the key and IV are "mixed" or combined in some way to create starting conditions ... these conditions are really an initial "block" of random data. For AES the block is always 128 bits, whether it's AES128 or AES256 (those numbers refer to the key size); for ChaCha20 it's 512 bits.
Now the magic and personality of the encryption algorithm really comes in. It's really all about how we generate one block after another, where each block is seemingly-random, with no predictable relationship to what came before or after, to someone who doesn't have the key.
I'm not going to go too deep into how these work, but if you want to get a sense yourself: start by looking up linear congruential generators, really simple functions that can make a block of data "cycle" in a random-looking, non-repeating way.
Then look up Feistel networks, which are sort of the next level of that. Look up S-Boxes if you're more curious, and finally, take a look at how Salsa20 (the ancestor of ChaCha20) does its rotations. It's all more approachable than you might think!
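To make the first of those concrete, here's a classic linear congruential generator (the constants are the well-known Numerical Recipes ones). It's the "hello world" of keystream ideas and absolutely NOT cryptographically secure -- its output is predictable, which is exactly why modern ciphers have to do much more:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # Each output is just (a * previous + c) mod m.
    # Statistically random-looking, but trivially predictable.
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(42)
print([next(gen) for _ in range(3)])
```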
O.k. so now we know how random streams of data can be combined with plaintext to do encryption and decryption, and we sort of know how random streams of data are produced. Isn't that it?
For full disk encryption, that's almost all there is to it. We basically just encrypt every block or sector of storage under the same key, using an IV that's derived from the "position" on disk. So we can always decrypt any block anywhere on the disk, as long as we have the key.
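Deriving the IV from the position is the key trick, because it needs no extra storage. Here's a sketch of just that idea -- a hypothetical helper to illustrate, not any real disk-encryption standard (real ones use dedicated schemes like XTS):

```python
def sector_iv(sector_number):
    # Each sector gets a unique, reproducible IV derived purely from
    # where it sits on disk -- nothing extra needs to be stored.
    return sector_number.to_bytes(16, "little")

print(sector_iv(0) != sector_iv(1))  # True -- every sector's IV is distinct
```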
But there's a problem with this ... someone can mess with the encrypted data. If I change the value of any byte, even if I don't have the key, it will mess with the decrypted output. There's no real security against that kind of meddling.
For sending messages and data over the network, that's not going to cut it. We don't want people tampering with our information. So we need to add an integrity check! There are a few schemes for this.
HMAC, GCM and Poly1305 are the common ones in use right now. In each case, these algorithms basically take the data as input, along with another key (an integrity key) and produce a MAC or a tag, which is just another piece of data that acts as a signature.
So to encrypt, and protect, our string, one scheme might be:

AES(key, IV, "Ovaltine") -> encrypted_output
HMAC(key, encrypted_output) -> MAC

and then on the wire, we send:

IV | encrypted_output | MAC
To decrypt, we check the MAC first by generating it again and making sure they are the same, and then we decrypt the output. Internally there are differences between how HMAC, GCM and Poly1305 generate these signatures, but you don't need to worry about that.
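Here's that whole encrypt-then-MAC shape, runnable. The HMAC part is the real thing from Python's standard library; the "cipher" is a toy hash-based keystream standing in for AES so the example needs no extra packages -- don't use it for real:

```python
import hashlib
import hmac
import os

def keystream(key, iv, n):
    # Toy keystream: hash key || iv || counter. A stand-in for a real
    # cipher like AES, just so this sketch runs with only the stdlib.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def seal(enc_key, mac_key, plaintext):
    iv = os.urandom(12)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, iv, len(plaintext))))
    mac = hmac.new(mac_key, iv + ct, hashlib.sha256).digest()
    return iv + ct + mac          # what goes on the wire: IV | ciphertext | MAC

def open_(enc_key, mac_key, wire):
    iv, ct, mac = wire[:12], wire[12:-32], wire[-32:]
    # Check the MAC *first*, in constant time, before touching the ciphertext.
    if not hmac.compare_digest(mac, hmac.new(mac_key, iv + ct, hashlib.sha256).digest()):
        raise ValueError("message was tampered with")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, iv, len(ct))))

ek, mk = os.urandom(16), os.urandom(16)
wire = seal(ek, mk, b"Ovaltine")
print(open_(ek, mk, wire))  # b'Ovaltine'
```

Note the two separate keys, and that flipping even one bit of the wire makes `open_` refuse to decrypt.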
Today, this combination of operations is wrapped up in a function we call "AEAD", which means Authenticated Encryption with Associated Data, and it does all of this in a mostly-foolproof way for you. Basically:

The "associated data" is just any other data you might want to "prove" the sender has, without actually sending it; like say some metadata that establishes a permission. It's often left empty.
Now you can still screw up with AEAD. If you re-use the same IV, that's bad!! There are attempts to make this better, my colleague Shay has been working on a cool scheme called SIV, and it adds a measure of protection against that too.
If you do use unique IVs, modern encryption is really robust. In general, you could publish some encrypted text in the New York Times and no-one would be able to crack it. This is true even if /some/ of the text is known. For example ...
In internet protocols a lot of the text is known: an HTTP server always responds the same way, and the first few bytes are totally guessable. This doesn't matter at all - it doesn't help an attacker figure anything else out, even one bit. We've come a long way from WWII.
But there are attacks that do work! If you're sending this data over a network, someone can see the timing and size of each message. This opens us up to traffic analysis.
Let's look at length first. O.k. so the length is obviously not hidden. That's fine if you're trying to protect your password or credit card number in the middle of a response. No big deal. But it does mean that someone might be able to fingerprint the content you're sending.
Simple example: if you send a gif over a messaging app, and the size of that gif is unique, someone in the middle can probably guess what gif you just sent. There are more sophisticated versions of this for Google Maps, Netflix, Wikipedia, and so on.
The way we protect against this is to "pad" messages, to make large numbers of messages appear to be the same size no matter what. Military grade network encryption actually pads all traffic all the time, so it's always the same!
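Padding is simple enough to sketch. Here's one hypothetical scheme: round every message up to a fixed bucket size, with a length prefix so the receiver can strip the padding back off (the 256-byte bucket and 2-byte length prefix are just illustrative choices):

```python
def pad_to_bucket(message, bucket=256):
    # Round the total length up to a multiple of `bucket` so that
    # messages of different sizes look identical on the wire.
    # A 2-byte prefix records the real length (so max 65535 bytes here).
    padded_len = (len(message) + 2 + bucket - 1) // bucket * bucket
    pad = padded_len - len(message) - 2
    return len(message).to_bytes(2, "big") + message + b"\x00" * pad

def unpad(padded):
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]

# A 2-byte message and a 200-byte message are indistinguishable by length:
print(len(pad_to_bucket(b"hi")), len(pad_to_bucket(b"x" * 200)))  # 256 256
```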
Another problem with length is that if you're using compression, and you let attackers control any of the content on a page that a user sees, that can let the attackers figure out even small secrets. Look up the "CRIME" attacks. It's awesome, and scary.
I said the other problem is timing. Obviously the timing of each message is public, but is that a big deal? It can be! For example, if you send a message for every user keystroke, it's trivial to figure out what they're typing through timing analysis. WOW.
Another example is VOIP. If your call app only sends data when people are speaking, but not during the silences, that's still enough to guess about 70% of English-language speech. Just from the silences! Scary cool.
These examples underline that even when you use encryption algorithms and schemes we've been perfecting for about 80 years, there are still gaps you can walk into that break the security. Which is why this stuff is worth knowing!
Anyway, that's the level I'm going to stick at for now, but we've covered a lot of ground. If you've finished this thread, thank you! You should now have a somewhat better understanding of what's going on, and what to be wary of. Feel free to AMA.
Thread by Colm MacCárthaigh.