
If you can copy the contents of the NAND memory to one chip for testing, then you can copy it to a hundred chips and parallelize the process, assuming I haven't misunderstood the hardware issues at hand (not my specialty, to be fair).

The exponential backoff of attempts is not really an issue in that case.
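A toy sketch of the parallel search I have in mind, splitting the 4-digit passcode space across workers and checking each guess against a PBKDF2 hash. The salt, target passcode, and iteration count here are all made up for illustration, and it assumes the check does not involve a device-unique key (which, as replies below point out, it does on a real iPhone):

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

SALT = b"demo-salt"  # stand-in for per-device data copied off the NAND
TARGET = hashlib.pbkdf2_hmac("sha256", b"7364", SALT, 1000)  # the unknown code's hash

def search(codes):
    """Try every code in `codes`; return the matching passcode or None."""
    for code in codes:
        guess = f"{code:04d}".encode()
        if hashlib.pbkdf2_hmac("sha256", guess, SALT, 1000) == TARGET:
            return guess.decode()
    return None

def crack(workers=4):
    """Split the 0000-9999 space into interleaved chunks, one per worker."""
    chunks = [range(i, 10_000, workers) for i in range(workers)]
    with ThreadPoolExecutor(workers) as pool:
        for hit in pool.map(search, chunks):
            if hit is not None:
                return hit
```

With a hundred copied chips, the same split just gets a hundred chunks instead of four.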



Interestingly, this is the EXACT advice [1] given at the Apple + FBI U.S. Congressional hearing on March 1st by California congressman Darrell Issa (R).

Congressman Issa was previously the CEO of DEI, a car security and audio equipment company. He is possibly one of the most tech-savvy members of the U.S. Congress and happens to be one of the wealthiest as well [2].

The Congressional hearing video footage is linked below; the suggestion is proposed at 1h23m13s in. [3]

[1]: http://qz.com/628745/i-have-no-idea-the-fbi-director-at-the-...

[2]: https://en.wikipedia.org/wiki/Darrell_Issa

[3]: https://youtu.be/g1GgnbN9oNw?t=4993


I noticed the same thing. I wonder whether James Comey's aw-shucks answer (roughly, "I have no idea, I'm just a regular Joe, I don't know much about the tech here") was feigned.


I watched the whole ~4 hours and somehow missed Issa's intervention. Thank you.


You can't parallelise it like that, because the passcode check relies on an unreadable AES key (the UID) that is unique to that particular iPhone.


Are you sure it's unreadable? I'd be so surprised if they couldn't sniff that out.

If it is, how do they do that? I can't imagine it's somehow embedded in circuitry (too complicated to mass produce) so it must be on some kind of storage medium, right? What makes that unreadable?


https://www.apple.com/business/docs/iOS_Security_Guide.pdf

> Every iOS device has a dedicated AES 256 crypto engine built into the DMA path between the flash storage and main system memory, making file encryption highly efficient.

> The device’s unique ID (UID) and a device group ID (GID) are AES 256-bit keys fused (UID) or compiled (GID) into the application processor and Secure Enclave during manufacturing. No software or firmware can read them directly; they can see only the results of encryption or decryption operations performed by dedicated AES engines implemented in silicon using the UID or GID as a key.

> Additionally, the Secure Enclave’s UID and GID can only be used by the AES engine dedicated to the Secure Enclave. The UIDs are unique to each device and are not recorded by Apple or any of its suppliers.


I'm fairly sure that only applies to the iPhone 5, and the FBI is interested in an iPhone 4, which doesn't have the Secure Enclave.


No, as Apple states, every iOS device has a hardware key.

> Every iOS device has a dedicated AES 256 crypto engine built into the DMA path between the flash storage and main system memory, making file encryption highly efficient.

In newer phones it is in the Secure Enclave instead of the CPU (the SE handles all encryption/decryption for the CPU).


There's no known API to read the UID out (there is one to use it, or the GID, or any other AES key: http://iphonedevwiki.net/index.php/IOCryptoAcceleratorFamily...), and no published side-channel attacks. The destructive/invasive techniques that would involve decapping the chip and using an electron microscope are not solid enough to use for this; there would be a huge risk of destroying evidence.


Embedding it in the circuitry isn't that complicated - remember, it's only 256 bits you have to embed, which for example could be stored in 256 fusible links.


If it's in fusible links, which IMO is very likely, wouldn't that mean you could read it back the same way you wrote it?

Maybe another HW key is required to do so.


> wouldn't that mean you could read it back the same way you wrote it

Not if there's a write-permit fuse as well =)


Good point but isn't blowing a fuse detectable with x-rays?


It's in the chip.


I guess it would depend, like brk mentioned, on whether or not the passcode is stored in NAND and only validated against user input. I'm out of my depth here, so I'll gladly defer to the experts.


The passcode is run through a key generation function, PBKDF2, to turn it into an AES256 key, which is then used to unwrap filesystem keys that protect any files marked as only available "when unlocked" or "after first unlock". The PBKDF2 process involves a special UID key which is unique to the device and inaccessible to software. (Software can only perform operations with the key.)

If you're not familiar with PBKDF2, it is similar in function to bcrypt or scrypt - it turns a password into a key and is designed to take a long time to prevent brute force attacks. Tying in the UID key prevents the attacker from brute forcing on a faster machine (or machines).
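To make the idea concrete, here's a minimal stdlib sketch. The UID value and iteration count are hypothetical, and mixing the UID in as the salt is only a stand-in: Apple's real construction runs intermediate values through the hardware AES engine keyed by the UID, which software can use but never read.

```python
import hashlib

# Hypothetical 256-bit UID; on a real device this is fused into the
# silicon at manufacture and no software or firmware can read it.
DEVICE_UID = b"\x13" * 32

def derive_unlock_key(passcode: bytes, iterations: int = 100_000) -> bytes:
    """Stretch the passcode into a 256-bit key, tangled with the device UID.

    This simulates the tangling by using the UID as the PBKDF2 salt;
    the actual iOS derivation is hardware-bound, not just salted.
    """
    return hashlib.pbkdf2_hmac("sha256", passcode, DEVICE_UID, iterations, dklen=32)
```

Because the UID never leaves the chip, an attacker holding only a NAND image can't evaluate this derivation off-device, so every guess has to go through that one phone's silicon.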

The wrapped keys I mentioned are stored in what Apple calls "effaceable storage", specially designated non-volatile memory that actually erases rather than just being marked as free. I have no idea if it's stored on the NAND chip on the iPhone 5c or not. (Apparently there was a previous attack that involved making the chip read-only, so Apple may have moved the effaceable storage to mitigate it.)

If you're interested in details, this is a good read, lots of interesting ideas in there:

    https://www.apple.com/business/docs/iOS_Security_Guide.pdf


This was my thought too after reading this article.

Part of this relies on the specific iPhone 5c from the shooter, because of the per-device hardware key. They ultimately need to unlock that specific phone, with the NAND data intact, in order to read the contents.

But, if the passcode is stored in NAND and validated only against user input they could duplicate the NAND and parallelize the process. If any part of the user code check involves the hardware key, then it wouldn't work.


In the article it is specifically mentioned that multiple keys are used for the encryption. There is a secret key burned into the A6 processor that is hard to access without risking destroying it. That key cannot be destroyed programmatically, so a secondary key on the NAND is destroyed after too many attempts. Only that wipe could be circumvented with this technique; it doesn't address the increasing delay between attempts, and AFAIK it is not possible to parallelize the attack without somehow replicating the A6 chip, i.e. very hard.
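For a sense of scale: Apple's iOS Security Guide says the PBKDF2 iteration count is calibrated so that one attempt takes roughly 80 ms on-device. If the wipe and escalating delays were somehow out of the picture, the on-device worst case for a 4-digit code is small:

```python
codes = 10_000     # the full 4-digit passcode space
per_try = 0.080    # ~80 ms per attempt, per Apple's iOS Security Guide
worst_case_minutes = codes * per_try / 60
print(round(worst_case_minutes, 1))  # about 13.3 minutes
```

So the delays and the wipe are doing essentially all of the work; the 80 ms derivation alone stops nothing for short codes.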


Or you could just extract the UID (though others are claiming that's not possible; I'm skeptical but don't know), throw the iPhone in the garbage, and let a distributed supercomputing cluster decrypt it.



