Because I’ve obviously gone all “Slurp Juice” on this MEGA attack thing, here’s an attempt at a decoder ring.
First: the client doesn’t trust the server. That’s the whole point of the design. The attacker is Mega, the target is the client.
The client wants to store encrypted files on the server.
The client generates a mess of keys. An RSA key (we’ll spend a lot of time with this). A Curve25519 key. A key for every file they store. And then a master key, one key to rule them all, k_m.

The client is going to forget all of these keys. Mega wants you to be able to install a client somewhere else and log in and get all your files, and you can’t be schlepping the keys around on like a USB fob or something. So instead: as the client generates these keys, it encrypts them under k_m and uploads them to the server (along with the associated public keys, not encrypted, fine). The server doesn’t have k_m and so can’t decrypt them.
What the client remembers from machine to machine is their password. From the password they’ll derive everything else they need.
PBKDF2: A hashy algorithm that takes a password (a “low entropy secret”) and spits out a crypto key (a “high entropy secret” suitable for plugging into AES or whatever).
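To make that concrete, here’s the low-entropy-in, high-entropy-out shape in Python. The salt and iteration count are made up for the demo, not Mega’s actual parameters:

```python
# PBKDF2: password in, AES-sized key out. Salt and iteration count
# here are illustrative, not MEGA's real ones.
import hashlib

password = b"correct horse battery staple"
salt = b"example-salt"

# 16 bytes = a valid AES-128 key
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=16)
print(len(key))  # 16
```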
RSA-CRT: RSA where you precompute some of the values to make it go faster, notably qInv, which is 1/q mod p — p and q are the factors of your RSA modulus. You can sort of glaze over this.
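If you don’t want to glaze over it: here’s CRT decryption with toy-sized primes (real keys use 1024+ bit primes). qInv is the value the attack later corrupts:

```python
# RSA-CRT with tiny numbers. qInv = 1/q mod p is one of the
# precomputed values; the CRT recombination below uses it.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

qInv = pow(q, -1, p)              # the coefficient the MEGA attack corrupts
dp, dq = d % (p - 1), d % (q - 1)

# Decrypt with CRT: two small exponentiations instead of one big one.
c = pow(42, e, n)
m1, m2 = pow(c, dp, p), pow(c, dq, q)
h = (qInv * (m1 - m2)) % p
m = m2 + h * q
print(m)  # 42
```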
ECB: Block ciphers encrypt 16 byte chunks, not more not less. If you want to encrypt an arbitrary string, you need a “mode” that ties together multiple block cipher invocations. ECB is the default mode, and the dumbest mode: just feed every 16 bytes of the plaintext through the cipher and cat together the resulting ciphertexts.
ECB is notoriously bad. You can see penguins through it.
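The penguin problem, in miniature. The 16-byte “block cipher” here is a keyed hash truncated to 16 bytes — not a real cipher, just enough to show that ECB maps equal plaintext blocks to equal ciphertext blocks:

```python
# ECB's defining flaw: same plaintext block in, same ciphertext block
# out. Toy keyed-hash "cipher" stands in for AES.
import hashlib

def toy_ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % 16 == 0
    out = b""
    for i in range(0, len(plaintext), 16):
        block = plaintext[i:i + 16]
        out += hashlib.sha256(key + block).digest()[:16]
    return out

ct = toy_ecb_encrypt(b"k", b"ATTACK AT DAWN!!" * 2 + b"retreat at dusk!")
blocks = [ct[i:i + 16] for i in range(0, len(ct), 16)]
print(blocks[0] == blocks[1])  # True: the repetition shows through
print(blocks[0] == blocks[2])  # False
```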
Authenticated Ciphers: By itself, a cipher provides confidentiality: you can’t recover the plaintext from the ciphertext without a key. It doesn’t provide integrity: an attacker can flip bits in the ciphertext, and you’ll decrypt something different from the original plaintext (usually: with big random blocks of garbage).
Mega doesn’t authenticate the encrypted keys its client sends to the server; the server can rewrite chunks of those keys, and the client can’t easily tell that the server did that (the client forgot the keys, remember? The server is helping it remember).
MAC: The hashy value that an authenticated cipher produces that you tack to the end of your ciphertext so clients can make sure that the value they’re decrypting is the original encrypted one, not some random thing an attacker rewrote into the ciphertext. Mega doesn’t use ‘em for encrypted keys.
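What using a MAC would look like, sketched with Python’s hmac module. The “encryption” is a placeholder XOR just to keep the demo self-contained; the point is the tag check:

```python
# Encrypt-then-MAC sketch: the tag lets the client detect a server
# that rewrote the ciphertext. XOR "encryption" is a placeholder.
import hashlib, hmac

mac_key = b"mac-key"

def mac(ciphertext: bytes) -> bytes:
    return hmac.new(mac_key, ciphertext, hashlib.sha256).digest()

ciphertext = bytes([b ^ 0x5A for b in b"the encrypted key material"])
tag = mac(ciphertext)

# An attacker flips one bit...
tampered = bytes([ciphertext[0] ^ 0x01]) + ciphertext[1:]

print(hmac.compare_digest(mac(ciphertext), tag))  # True: accept
print(hmac.compare_digest(mac(tampered), tag))    # False: reject, don't decrypt
```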
Oracles: You’re Alice, I’m Mallory. You hold Alice’s secret. I send you some value, and you use the secret to do some cryptography, wee! But you have a bug: something about the value Mallory sent you means you behave differently depending on what your secret is, and Mallory can watch carefully and notice your behavior changing. Mallory does this over and over again, using unauthenticated ciphertexts (see above) with successively better guesses. Mallory is conducting an “adaptive chosen ciphertext attack”; it’s adaptive because Mallory can see stepwise behavior from Alice and refine their guesses; it’s “chosen ciphertext” because Mallory controls the ciphertext values.
Padding: Your crypto algorithm needs, say, 16 bytes of plaintext input. You have 6 bytes. Padding is the scheme you come up with to fill the other 10 bytes. With RSA, you have 40 bytes; your RSA key needs 216 more to make 256. Padding fills those bytes. Padding schemes are standardized.
Bleichenbacher’s RSA Padding Attack: AKA BB’98. An oracle based on RSA padding mistakes: specifically, you ask someone to decrypt some ciphertext with an RSA key they have, but that ciphertext has bad padding. How the target handles that bad padding tells you something about the plaintext behind the ciphertext. Specifically: the attacker multiplies some value into the ciphertext (you can do that with RSA). The resulting ciphertext either does or doesn’t have valid padding, and the server either freaks out, or doesn’t freak out. If it doesn’t, you infer valid padding, and can further infer that the message is in some range of possible messages based on what you multiplied into the ciphertext. Repeat to gain more info about the message until you’ve recovered it.
This is the most popular attack against RSA. It happens because the most common standard padding mechanism for RSA (PKCS1v1.5) is super shitty and nobody can seem to get anyone to use anything else.
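The multiplicative trick BB’98 leans on is just RSA arithmetic: multiply s^e into a ciphertext and the hidden plaintext gets multiplied by s. Toy-sized key here; real attacks run this against the victim’s full-size key:

```python
# RSA malleability: the attacker transforms a ciphertext without
# knowing the private key, and the plaintext moves predictably.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

m = 42
c = pow(m, e, n)

s = 3
c2 = (c * pow(s, e, n)) % n           # no private key needed for this step
print(pow(c2, d, n) == (m * s) % n)   # True: decrypts to s*m mod n
```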
The Padding Mega Uses: Just a bunch of zeroes. 😱
The Encryption Mode Mega Uses: Unauthenticated ECB. 😱
Which means the server can:

- See penguins through the encrypted keys (nbd).
- Take an encrypted key and “randomize” a 16-byte block of it by flipping a bit (that block will decrypt to garbage).
- Take a block and duplicate it to get that block to decrypt to itself twice (not that scary here but, like, a thing you obviously shouldn’t be able to do!).
- Remove a block to remove the associated 16 bytes of plaintext (same).
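Those three party tricks, sketched on the same kind of toy ECB “cipher” as before (a keyed hash per block, standing in for AES; it isn’t invertible, it just models “each block encrypts independently”):

```python
# Cut-and-paste surgery on unauthenticated ECB ciphertext.
import hashlib

def enc_block(key: bytes, block: bytes) -> bytes:
    return hashlib.sha256(key + block).digest()[:16]

key = b"k"
pt_blocks = [b"AAAAAAAAAAAAAAAA", b"BBBBBBBBBBBBBBBB", b"CCCCCCCCCCCCCCCC"]
ct = [enc_block(key, b) for b in pt_blocks]

# Randomize: flip one bit of block 1. In real ECB that block now
# decrypts to garbage, and nothing flags it.
ct_flipped = list(ct)
ct_flipped[1] = bytes([ct[1][0] ^ 1]) + ct[1][1:]

# Duplicate: paste block 0 over block 2; it decrypts to plaintext
# block 0, twice.
ct_dup = [ct[0], ct[1], ct[0]]

# Remove: drop a block, and 16 bytes of plaintext silently vanish.
ct_cut = [ct[0], ct[2]]

# Every edited ciphertext is still made of "valid" blocks, so an
# unauthenticated decryptor accepts all three.
print(ct_dup[2] == ct_dup[0])  # True
```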
So then, the basic protocol works like this:
1. You’re the client, you want to log in. You don’t have any of your keys; you just remember your password.
2. You PBKDF2 the password to derive an identifier you send to the server so it knows you’re you. It looks you up with that identifier.
3. The server sends you back encrypted-rsa-key (not precisely, but close enough I think?). encrypted-rsa-key is something you, the client, originally sent to Mega. But Mega controls it; it can replace all or part of it with any bytes it wants.
4. You decrypt encrypted-rsa-key, using unauthenticated AES-ECB, with a key you derive from PBKDF2 of your password. You’ve now recovered your RSA key that you previously stored with Mega.
5. The server sends you encrypted-sid, the encrypted session ID, encrypted under zero-padded RSA with the key Mega just had you decrypt.
6. You recover the ~40 byte SID from the 256-byte RSA plaintext. You ignore all the other bytes. You send the SID to the server.
7. The server’s like, yep, that’s the right SID. You’re now logged in, I guess.
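Steps 1 through 4, as a loose Python sketch. Everything here is a stand-in: the labels, salts, and iteration count are invented for illustration and are not Mega’s actual derivations; the point is just the shape of the flow:

```python
# Loose sketch of login steps 1-4. All parameters are invented.
import hashlib

def derive(password: bytes, label: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password, label, 100_000, dklen=16)

password = b"hunter2"                       # step 1: all the client has
ident = derive(password, b"identifier")     # step 2: sent so the server can look you up
kek = derive(password, b"key-encryption")   # key that decrypts encrypted-rsa-key

# Steps 3-4: the server hands back encrypted-rsa-key, a ciphertext it
# fully controls; the client decrypts with kek (unauthenticated ECB)
# and trusts whatever falls out. That blind trust is the whole opening.
print(ident != kek)  # True: distinct values derived from one password
```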
encrypted-rsa-key from step 3 is a length-delimited 4-tuple (the two prime factors, the private exponent, and qInv), just basic RSA stuff. The client doesn’t check any of it; it just makes sure it gets 4 length-delimited values from the decrypted blob and assumes it’s a good RSA key and does RSA math with it.
But the server chooses the encrypted RSA key ciphertext. The server also chooses the encrypted SID value.
If the server is Backendal, Haller, and Paterson (BHP), the authors of this paper, the encrypted RSA key isn’t the one you sent. They flip bits in approximately the spot in the ciphertext corresponding to qInv, partially randomizing it. You trust it anyways, because it’s not authenticated. You do RSA math with this busted-ass CRT coefficient, and then you send a chunk of the result (which you don’t notice is fucky, because you aren’t checking the RSA padding, which is just all zeroes anyways) back to BHP for them to refine their guess.

BHP controls the SID along with the RSA key. The SID encodes a guess as to q, one of the RSA factors of the key. The busted-ass RSA key, plus the carefully chosen SID, set up a condition where the client will post back zero if the guess is less than q, and nonzero if greater.
Binary search: recover q, recover the whole RSA key.
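The shape of that oracle, stripped of all the RSA machinery: each login attempt leaks one bit (“did the client post back zero?”), which behaves like a comparison against the secret q, and binary search does the rest. Toy-sized number here; the real q is ~1024 bits, hence on the order of a thousand logins:

```python
# One comparison bit per query + binary search = full recovery.
secret_q = 52_859  # stands in for the secret RSA factor

def oracle(guess: int) -> bool:
    # True ~ "client posted back zero" ~ guess is below the secret
    return guess < secret_q

lo, hi = 0, 2**17
queries = 0
while lo < hi:
    mid = (lo + hi) // 2
    queries += 1
    if oracle(mid):
        lo = mid + 1
    else:
        hi = mid

print(lo)  # 52859, in ~17 queries
```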
It gets dumber!
Recall from the beginning that we generate a whole bunch of keys, not just one. One for each file. A Curve25519 key. They’re all encrypted under the same master key as the RSA key.
We can repeat the attack I just outlined, but instead of just randomizing qInv, because of ECB, we can swap in chunks of those other keys. Then we can run a similar oracle attack to the one we used to recover the RSA key to recover the plaintext of those chunks. We can do this with all the keys the user has uploaded, if we want.
Hopefully that makes more sense than my “Slurp Juice” tweets. I’m not trying to cover the attacks accurately or completely (I don’t fully grok them yet), just trying to clarify some jargon I was spouting.