bancfc@openmailbox.org transcribed 0.7K bytes:
> Some great developments in lattice-based crypto. DJB just released a
> paper on NTRU Prime:
> - Competitively fast compared to the leading lattice-based
>   cryptosystems, including NewHope.
> - A safer implementation of NTRU that avoids vulnerable ring
>   structures and runs in constant time.
> - The only implementation that mitigates decryption failures
>   completely, killing information leaks to adversaries.
> - Includes some handy advice for "transitional cryptography" - mixing
>   and matching classical signature schemes with PQ public keys.
Hello,
I am similarly excited to see a more comprehensive write-up of the NTRU Prime idea from Dan's blog post of several years ago on a subfield-logarithm attack against ideal-lattice-based cryptography. [0] The idea of removing some of the ideal structure from the lattice by altering the original NTRU parameters, while still aiming for a comparably high estimated minimum post-quantum security strength (~2^128 bits post-quantum and ~2^215 classical for NTRU Prime, versus ~2^206 bits post-quantum and ~2^229 classical for NewHope) and speed competitive with NewHope, [1] is very exciting, and I'm looking forward to more research w.r.t. the ideas in the original blog post (particularly the exploitation of the ideal structure).

Additionally, the Toom-6 decomposition of the "medium-sized" 768-degree polynomial in NTRU Prime, done so that Karatsuba can be applied, is quite elegant. I'm also glad to see that my earlier idea [2] of applying a stable sorting network to generate valid polynomial coefficients in constant time is suggested within the NTRU Prime paper; a rough sketch of that trick follows.
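For concreteness, here is a minimal sketch of the sort-based sampling trick in Python. It is purely illustrative: CPython makes no real constant-time guarantees, and the parameters p=761 and w=286 are just example values, so treat this as a picture of the data-independent structure rather than a usable implementation.

    import os

    P = 761   # ring dimension (example value)
    W = 286   # number of nonzero coefficients (example value)

    def cswap(pairs, i, j):
        # Branch-free compare-and-swap of packed (key, coeff) integers:
        # mask is -1 (all ones) iff a swap is needed, and the same XOR
        # sequence executes either way.
        a, b = pairs[i], pairs[j]
        mask = -(b < a)
        t = (a ^ b) & mask
        pairs[i], pairs[j] = a ^ t, b ^ t

    def sorting_network(pairs):
        # Odd-even transposition sort: O(n^2) compare-swaps at positions
        # fixed in advance -- a slow but simple sorting network whose
        # memory-access pattern is independent of the data being sorted.
        n = len(pairs)
        for rnd in range(n):
            for i in range(rnd & 1, n - 1, 2):
                cswap(pairs, i, i + 1)

    def random_weight_w_poly():
        # Fixed multiset of coefficients: W nonzero (encoded 1 -> +1,
        # 2 -> -1) plus P - W zeros.
        coeffs = [1 + (os.urandom(1)[0] & 1) for _ in range(W)] + [0] * (P - W)
        # Attach a uniform 32-bit key to each coefficient, then permute
        # by sorting on the keys with the data-independent network.
        pairs = [(int.from_bytes(os.urandom(4), "big") << 2) | c for c in coeffs]
        sorting_network(pairs)
        return [{0: 0, 1: 1, 2: -1}[e & 3] for e in pairs]

A real implementation would use a merge-based network (e.g. Batcher's odd-even mergesort) for O(n log^2 n) compare-swaps, but the shuffling idea is the same.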
However, towards the end of the original NTRU Prime blog post, Dan mentioned: "I don't recommend actually using NTRU Prime unless and until it survives years of serious cryptanalytic attention, including quantitative evaluation of specific parameter choices." Léo Ducas, one of the NewHope authors, has responded to the NTRU Prime paper with a casual cryptanalysis of its security claims, [3] mentioning that "[a] quick counter-analysis suggests the security of the proposal is overestimated by about 75 bits", which would bring NTRU Prime down to ~2^140 classical security.
Current estimates for a hybrid BKZ+sieving attack, combined with Dan's subfield-logarithm attack *if* it someday proves successful (which remains uncertain), would (being quite generous towards the attacker) roughly halve the security bits for n=1024 (since the embedded-subfield tricks are probably not viable), bringing NewHope down to roughly 2^103 post-quantum and 2^114 classical security. For the case of the hybrid handshake in this proposal, it still doesn't matter: the attacker would also need to break X25519, which keeps its 2^128 bits of classical security. (Not to mention that 103 bits of post-quantum security is not terrible, considering that the attacker still needs to do 2^103 computations for each and every Tor handshake she wants to break, because keys are not reused.)
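To make the "attacker must break both" point concrete, here is a toy sketch of a hybrid key derivation in Python. The label and layout are made up for illustration and are not Tor's actual construction:

    import hashlib

    def hybrid_key(x25519_ss: bytes, pq_ss: bytes, transcript: bytes) -> bytes:
        # Both shared secrets feed a single KDF, so recovering the
        # session key requires breaking *both* X25519 and the PQ KEM;
        # breaking only one still leaves the other's security margin.
        return hashlib.sha3_256(
            b"hybrid-handshake-v0" + x25519_ss + pq_ss + transcript
        ).digest()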
Please feel free to correct me if I've misunderstood something. IANAC and all that.
Further, there are other problems in the NTRU Prime paper:
1. Defining PFS as "the server erases its key every 60 seconds" seems arbitrary and a bit strange. It also makes the claims hard to analyse in comparison with the NTRU literature (where, as far as I know, it's never stated whether or not keys may be reused, and what downsides might come with that) as well as with NewHope (where it's explicitly stated that keys should not be reused).
2. In Fig. 1.2, the number of bytes sent by a NewHope client is 1792, not 2048. (To be fair, it was 2048 bytes in an earlier version.)
3. The bandwidth estimates for NTRU Prime do not take into account that, because it provides a key-encapsulation mechanism rather than a key exchange, the client must already know the server's long-term public encryption key so that it can encrypt its public key to the server in the first round of the handshake (the toy flow sketched below illustrates why).
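To illustrate the flow (and nothing else: this toy is completely insecure, since anyone holding pk could also decapsulate, whereas a real KEM requires the secret key; all names here are made up), here is why the client's very first message already depends on the server's long-term public key:

    import os, hashlib

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def toy_keygen():
        sk = os.urandom(32)
        return hashlib.sha256(b"toy-pk" + sk).digest(), sk   # (pk, sk)

    def toy_encapsulate(server_pk: bytes):
        # The client cannot run this step without server_pk in hand,
        # which is exactly why the key must either live in the
        # (micro)descriptor or cost an extra round trip.
        nonce, ss = os.urandom(32), os.urandom(32)
        ct = nonce + xor(ss, hashlib.sha256(server_pk + nonce).digest())
        return ct, ss

    def toy_decapsulate(server_sk: bytes, ct: bytes) -> bytes:
        pk = hashlib.sha256(b"toy-pk" + server_sk).digest()
        nonce, masked = ct[:32], ct[32:]
        return xor(masked, hashlib.sha256(pk + nonce).digest())

    # One-round flow, assuming the client somehow already has server_pk:
    server_pk, server_sk = toy_keygen()
    ct, client_ss = toy_encapsulate(server_pk)   # client -> server: ct
    assert toy_decapsulate(server_sk, ct) == client_ss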
Further, and more specifically in the context of Tor handshakes, this requires either that Tor add a second round to the handshake (for which we currently have no mechanism, nor any proposed mechanism), or that we add the server's NTRU Prime key to the server's (micro)descriptor, increasing descriptor size by 1232 bytes per relay. For the case of adding these keys to microdescriptors, where the average microdescriptor size is currently 416 bytes [4] and ~1 Gb/s is spent on requests to the directories for descriptors, [5] increasing the average microdescriptor size to 1648 bytes would roughly quadruple the amount of network traffic used by directories to respond to fetches (and then double that number again, because additional bandwidth is used through the client's guard relay, since most directory fetches are tunneled). This means that any relay providing < ~1.142 Mb/s of bandwidth would use up more network bandwidth telling everything else in the network about its existence and keys than the relay itself actually provides, i.e. 3840 out of the current ~7100 relays would become a burden on the network. This might be doable, [6] but I just wanted to point out how much it matters for us not to need to add giant keys to descriptors.
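For a rough sanity check of that arithmetic (my numbers, plugged in naively; the exact break-even figure depends on how directory load is apportioned across relays):

    avg_md   = 416     # bytes, current average microdescriptor size [4]
    key      = 1232    # bytes, NTRU Prime public key added per relay
    dir_rate = 1.0e9   # bits/s currently spent answering fetches [5]
    relays   = 7100

    growth = (avg_md + key) / avg_md     # ~3.96x -> "roughly quadruple"
    total  = dir_rate * growth * 2       # doubled again via guard tunneling
    print("break-even per relay: %.3f Mb/s" % (total / relays / 1e6))
    # -> ~1.116 Mb/s, in line with the ~1.142 Mb/s figure above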
[0]: https://blog.cr.yp.to/20140213-ideal.html
[1]: https://cryptojedi.org/papers/newhope-20160328.pdf
[2]: https://lists.torproject.org/pipermail/tor-dev/2016-May/010903.html
[3]: https://groups.google.com/forum/?_escaped_fragment_=topic/cryptanalytic-algo...
[4]: https://collector.torproject.org/recent/relay-descriptors/microdescs/micro/2...
[5]: https://metrics.torproject.org/dirbytes.html?start=2016-02-16&end=2016-0...
[6]: As an aside, I'd be super happy to kick the crap relays out of the network: https://lists.torproject.org/pipermail/tor-dev/2014-September/007558.html
Best Regards,