Hi list,
This is a proposal to use a quantum-safe hybrid handshake for Tor communications. Given the NSA's recent announcement on moving towards quantum-safe cryptography, it would be nice to have a quantum-safe feature for Tor.
The idea of the quantum-safe hybrid handshake is to combine a classical key exchange with a key encapsulation mechanism (KEM) instantiated by a quantum-safe encryption algorithm, so that the combination gives both (classical) authentication and quantum safety. In a bit more detail, the client and the server agree on a classical pre-master secret, $c$, using the ntor protocol. In parallel, the client generates a public/private key pair for the quantum-safe encryption algorithm and sends the public key to the server. The server picks a random string, $q$, encrypts it with the public key, and sends the ciphertext back to the client. The final secret is the output of KDF(c|q).
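A minimal sketch of that combination step, assuming hypothetical KEM primitives (qs_keygen, qs_encrypt, qs_decrypt) passed in as arguments; SHA-256 stands in for Tor's actual KDF, and ntor_secret plays the role of the classical pre-master secret $c$:

    import hashlib
    import os

    def kdf(c: bytes, q: bytes) -> bytes:
        # The final shared secret is KDF(c | q); SHA-256 is only a
        # stand-in for the KDF Tor actually uses.
        return hashlib.sha256(c + q).digest()

    def hybrid_secret(qs_keygen, qs_encrypt, qs_decrypt, ntor_secret: bytes) -> bytes:
        # Client: generate a fresh quantum-safe keypair; the public key
        # travels to the server alongside the normal ntor handshake.
        qs_sk, qs_pk = qs_keygen()
        # Server: pick a random string q and encrypt it under the
        # client's quantum-safe public key.
        q = os.urandom(32)
        ciphertext = qs_encrypt(qs_pk, q)
        # Client: decrypt to recover q; both ends compute KDF(c | q).
        assert qs_decrypt(qs_sk, ciphertext) == q
        return kdf(ntor_secret, q)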
This proposal defeats the harvest-then-decrypt attack with minimal impact on the existing ntor protocol. An adversary needs to be able to break the quantum-safe encryption algorithm to learn $q$. On the other hand, if the quantum-safe encryption algorithm turns out to be insecure, the protocol is still as secure as the ntor protocol. In other words, it will at least do no harm to the current security.
In addition, this is a modular design that allows us to use any quantum-safe cryptographic primitive. As a proof of concept, we instantiated the protocol with NTRUEncrypt lattice-based crypto. We implemented the protocol with NTRU parameters that give 128-bit security. The code is available at https://github.com/NTRUOpenSourceProject/ntru-tor
Please see the attached request for this feature change. The proof of the protocol can be found at: https://eprint.iacr.org/2015/287.pdf
Some known issues:
1. Cell size. As far as we know, all quantum-safe encryption algorithms have large key and/or ciphertext sizes that exceed the ~500-byte cell size. So this protocol needs to transmit multiple cells, no matter which quantum-safe encryption algorithm we choose. This is addressed by "Proposal 249: Allow CREATE cells with >505 bytes of handshake data" (a rough illustration follows this list).
2. Quantum-safe authentication: there is no quantum-safe authentication in this protocol. We believe that authentication can wait, as a future (quantum) adversary cannot come back to the present and break authentication. Hence, we use ntor authentication to keep the proposal compact and simple. It will be future work after this proposal.
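As a rough illustration of the cell-size point in item 1 (the 505-byte figure comes from the title of proposal 249; the framing below is invented for illustration, not Tor's actual wire format):

    CELL_PAYLOAD = 505  # handshake bytes per CREATE-family cell (per prop 249's title)

    def fragment(handshake: bytes) -> list:
        # An NTRU public key alone already exceeds one payload, so the
        # handshake data must be split across several cells.
        return [handshake[i:i + CELL_PAYLOAD]
                for i in range(0, len(handshake), CELL_PAYLOAD)]

    cells = fragment(b"\x00" * 1200)  # e.g. an NTRU public key plus ntor fields
    assert len(cells) == 3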
Thanks for your time, and happy holidays!
Zhenfei Zhang
Security Innovation
On Mon, Dec 28, 2015 at 5:34 PM, Zhenfei Zhang zzhang@securityinnovation.com wrote:
Hi list,
This is a proposal to use a quantum-safe hybrid handshake for Tor communications. Given the NSA's recent announcement on moving towards quantum-safe cryptography, it would be nice to have a quantum-safe feature for Tor.
The idea of the quantum-safe hybrid handshake is to combine a classical key exchange with a key encapsulation mechanism (KEM) instantiated by a quantum-safe encryption algorithm, so that the combination gives both (classical) authentication and quantum safety. In a bit more detail, the client and the server agree on a classical pre-master secret, $c$, using the ntor protocol. In parallel, the client generates a public/private key pair for the quantum-safe encryption algorithm and sends the public key to the server. The server picks a random string, $q$, encrypts it with the public key, and sends the ciphertext back to the client. The final secret is the output of KDF(c|q).
This proposal defeats the harvest-then-decrypt attack with minimal impact on the existing ntor protocol. An adversary needs to be able to break the quantum-safe encryption algorithm to learn $q$. On the other hand, if the quantum-safe encryption algorithm turns out to be insecure, the protocol is still as secure as the ntor protocol. In other words, it will at least do no harm to the current security.
In addition, this is a modular design that allows us to use any quantum-safe cryptographic primitive. As a proof of concept, we instantiated the protocol with NTRUEncrypt lattice-based crypto. We implemented the protocol with NTRU parameters that give 128-bit security. The code is available at https://github.com/NTRUOpenSourceProject/ntru-tor
Please see the attached request for this feature change. The proof of the protocol can be found at: https://eprint.iacr.org/2015/287.pdf
Some known issues:
- Cell size. As far as we know, all quantum-safe encryption algorithms have large key and/or ciphertext sizes that exceed the ~500-byte cell size. So this protocol needs to transmit multiple cells, no matter which quantum-safe encryption algorithm we choose. This is addressed by "Proposal 249: Allow CREATE cells with >505 bytes of handshake data".
- Quantum-safe authentication: there is no quantum-safe authentication in this protocol. We believe that authentication can wait, as a future (quantum) adversary cannot come back to the present and break authentication. Hence, we use ntor authentication to keep the proposal compact and simple. It will be future work after this proposal.
Thanks for your time, and happy holidays!
Thank you! This is now proposal 263.
peace,
Zhenfei Zhang transcribed 22K bytes:
Hi list,
This is a proposal to use a quantum-safe hybrid handshake for Tor communications. Given the NSA's recent announcement on moving towards quantum-safe cryptography, it would be nice to have a quantum-safe feature for Tor.
Hello Zhenfei,
We're also very interested in having post-quantum security for Tor, and it's super exciting and helpful to have researchers helping us come up with designs.
The idea of the quantum-safe hybrid handshake is to combine a classical key exchange with a key encapsulation mechanism (KEM) instantiated by a quantum-safe encryption algorithm, so that the combination gives both (classical) authentication and quantum safety. In a bit more detail, the client and the server agree on a classical pre-master secret, $c$, using the ntor protocol. In parallel, the client generates a public/private key pair for the quantum-safe encryption algorithm and sends the public key to the server. The server picks a random string, $q$, encrypts it with the public key, and sends the ciphertext back to the client. The final secret is the output of KDF(c|q).
Assuming an adversary who possesses a quantum computer capable of performing Shor's algorithm to break ECDLP in some reasonable amount of time, sending the client's "quantum-safe" key through the ntor channel still doesn't provide post-quantum security, since this portion of the KEX isn't authenticated with a PQ-secure scheme. (Upon reading further, I realised this is mentioned in the last paragraph of your proposal.)
I feel like there needs to be some new terminology here. It's certainly not post-quantum secure, but "quantum-safe" doesn't seem right either, because it's exactly the point at which the adversary gains appropriate quantum computational capabilities that it becomes *unsafe*. If I may, I suggest calling it "pre-quantum secure". :)
This proposal defeats the harvest-then-decrypt attack with minimal impact on the existing ntor protocol. An adversary needs to be able to break the quantum-safe encryption algorithm to learn $q$. On the other hand, if the quantum-safe encryption algorithm turns out to be insecure, the protocol is still as secure as the ntor protocol. In other words, it will at least do no harm to the current security.
In addition, this is a modular design that allows us to use any quantum-safe cryptographic primitive. As a proof of concept, we instantiated the protocol with NTRUEncrypt lattice-based crypto. We implemented the protocol with NTRU parameters that give 128-bit security. The code is available at https://github.com/NTRUOpenSourceProject/ntru-tor
Thanks! This is great! Having an implementation to go along with the proposal makes it easier to evaluate. I actually already looked at your code a couple of months ago, but I'll take a second look after the new year and see what (if anything) changed.
However, if we were to go the route of using NTRU, we'd likely want to instead use Dan Bernstein's NTRU Prime parameters, in order to eliminate some of the inherent algebraic structure of the ideal lattice which might possibly be exploited. [0] [1]
Also, what is the current state of patents on NTRU? My understanding is that NTRU is dual-licenced as GPLv2+ and commercial, [2] however, Tor is currently BSD licenced. Would it be necessary to relicense Tor as GPLv2+? Will the GPL exceptions continue to be applied to further patents on optimisations and improvements/protections for NTRU?
Please see the attached request for this feature change. The proof of the protocol can be found at: https://eprint.iacr.org/2015/287.pdf
Some known issues:
- Cell size. As far as we know, all quantum-safe encryption algorithms have large key and/or ciphertext sizes that exceed the ~500-byte cell size. So this protocol needs to transmit multiple cells, no matter which quantum-safe encryption algorithm we choose. This is addressed by "Proposal 249: Allow CREATE cells with >505 bytes of handshake data".
Nick created proposal #249 [3] to address this issue. (It'd be great to have additional review on the proposal to ensure that it does meet the needs of designers of candidate post-quantum schemes!)
- Quantum-safe authentication: there is no quantum-safe authentication in this protocol. We believe that authentication can wait, as a future (quantum) adversary cannot come back to the present and break authentication. Hence, we use ntor authentication to keep the proposal compact and simple. It will be future work after this proposal.
Not to delay progress on making Tor post-quantum secure, but if we tackled both encryption and authentication in parallel, we'd be better able to compare the advantages and disadvantages of schemes, given that we can take overhead fully into account, e.g. total added keysize(s). For example, if we were to add something like a McEliece CFS signature (code-based) or an XMSS signature (hash-based) to add authentication, on top of already using some lattice-based cryptosystem like NTRU, we would have two quite large keys, with one or both needing to be distributed, for each relay.
Thanks for your time, and happy holidays!
Many thanks for the proposal, research, and code! Have a great new year!
[0]: http://blog.cr.yp.to/20140213-ideal.html
[1]: http://cr.yp.to/talks/2015.04.22/slides-djb-20150422-a4.pdf
[2]: https://github.com/NTRUOpenSourceProject/ntru-crypto/blob/master/PATENTS.md
[3]: https://gitweb.torproject.org/torspec.git/tree/proposals/249-large-create-ce...
On Thu, Dec 31, 2015 at 3:51 PM, isis isis@torproject.org wrote:
Zhenfei Zhang transcribed 22K bytes:
[...]
In addition, this is a modular design that allows us to use any quantum-safe cryptographic primitive. As a proof of concept, we instantiated the protocol with NTRUEncrypt lattice-based crypto. We implemented the protocol with NTRU parameters that give 128-bit security. The code is available at https://github.com/NTRUOpenSourceProject/ntru-tor
Thanks! This is great! Having an implementation to go along with the proposal makes it easier to evaluate. I actually already looked at your code a couple of months ago, but I'll take a second look after the new year and see what (if anything) changed.
However, if we were to go the route of using NTRU, we'd likely want to instead use Dan Bernstein's NTRU Prime parameters, in order to eliminate some of the inherent algebraic structure of the ideal lattice which might possibly be exploited. [0] [1]
I'd also like us to consider the Ring-LWE proposals that Yawning has been working on, but I think that this proposal forms a good basis for future work in all those directions.
(Generally, I'm a bit afraid of being the first adopter of much of anything, or the biggest user of any protocol, but I think we're soon reaching the point where we'll have to.)
Also, what is the current state of patents on NTRU? My understanding is that NTRU is dual-licenced as GPLv2+ and commercial, [2] however, Tor is currently BSD licenced. Would it be necessary to relicense Tor as GPLv2+? Will the GPL exceptions continue to be applied to further patents on optimisations and improvements/protections for NTRU?
Have a look at https://github.com/NTRUOpenSourceProject/ntru-crypto/blob/master/FOSS%20Exce... . If I'm reading that right (and Wendy has seen it too), we have their permission to use their GPL code along with BSD-licensed Tor.
peace, and a happy new year to all,
Hello,
On Thu, 31 Dec 2015 20:51:43 +0000 isis isis@torproject.org wrote: [snip]
I feel like there needs to be some new terminology here. It's certainly not post-quantum secure, but "quantum-safe" doesn't seem right either, because it's exactly the point at which the adversary gains appropriate quantum computational capabilities that it becomes *unsafe*. If I may, I suggest calling it "pre-quantum secure". :)
Post-quantum forward-secrecy is what I've been using to describe this property.
[snip]
However, if we were to go the route of using NTRU, we'd likely want to instead use Dan Bernstein's NTRU Prime parameters, in order to eliminate some of the inherent algebraic structure of the ideal lattice which might possibly be exploited. [0] [1]
That's a non-trivial amount of work, though I have issues with the parameter selection as well, which the authors of the proposal may be able to clarify.
As I recall, the product form parameter sets are extra encumbered. Apart from key/ciphertext size and a minor performance differential, is there any reason not to use one of the X9.98 parameter sets (e.g. EES613EP1)?
Also, what is the current state of patents on NTRU? My understanding is that NTRU is dual-licenced as GPLv2+ and commercial, [2] however, Tor is currently BSD licenced. Would it be necessary to relicense Tor as GPLv2+? Will the GPL exceptions continue to be applied to further patents on optimisations and improvements/protections for NTRU?
There's a FOSS patent grant, and Tim Buktu has a 3BSD NTRU implementation. (NB: The only implementation I looked at in depth is the reference Java code.)
https://github.com/tbuktu/libntru
It additionally supports deciding how encumbered you want the library to be (support for the product form parameter sets can be disabled at compile time).
Note: Debian packaging of the library is stalled due to concerns over licensing, and the bug thread ends in a "take it to legal".
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=802259
The worst case here from our perspective, depending on what Debian Legal (or Fedora Legal, etc.) thinks about the whole thing, is that we would need to feature-gate (at compile time) NTRU support till 2017/2020 (depending on parameter set choices).
- Quantum-safe authentication: there is no quantum-safe authentication in this protocol. We believe that authentication can wait, as a future (quantum) adversary cannot come back to the present and break authentication. Hence, we use ntor authentication to keep the proposal compact and simple. It will be future work after this proposal.
Not to delay progress on making Tor post-quantum secure, but if we tackled both encryption and authentication in parallel, we'd be better able to compare the advantages and disadvantages of schemes, given that we can take overhead fully into account, e.g. total added keysize(s). For example, if we were to add something like a McEliece CFS signature (code-based) or an XMSS signature (hash-based) to add authentication, on top of already using some lattice-based cryptosystem like NTRU, we would have two quite large keys, with one or both needing to be distributed, for each relay.
I personally don't think that any of the PQ signature schemes are usable for us right now, because the smallest key size for an algorithm that isn't known to be broken is ~1 KiB (SPHINCS256), and we probably can't afford to bloat our descriptors/micro-descriptors that much.
If we decide we want to add PQ authentication with one of the current primitives Right Now, we could do something like:
* Add a SPHINCS256 key to "the list of long term relay keys" and distribute it in descriptors/microdescriptors. (~1KiB binary, lots bigger b64ed).
* The responder signs all the public parameters with their SPHINCS key as part of the key exchange (Go find the SIGMA paper). (+41 KiB binary, for the signature).
But I would view that as premature (there are more important things that need to use PQ signatures first, like the consensus) due to primitive maturity/usability/overhead concerns.
Other changes that should/could be made to the proposal:
* "For 128 bits quantum security, use NTRU_EESS743EP1." should be "For 256 bits" (Section 2.3).
* We have the opportunity (and code in master) to start using the FIPS 202 primitives. Since we need to modify the ntor code anyway, we should use SHA-3 and SHAKE256 instead of HMAC-SHA256 and HKDF-SHA256 respectively (see the sketch after this list).
* Is it worth migrating our ECC to X448?
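To illustrate the FIPS 202 point above, deriving variable-length key material from an XOF could look like the following sketch; the tag string is a placeholder for domain separation, not the proposal's actual PROTOID constant:

    import hashlib

    def shake256_kdf(secret_input: bytes, tag: bytes, n_bytes: int) -> bytes:
        # SHAKE256 is an extendable-output function, so arbitrary-length
        # key material falls out directly; the tag provides the domain
        # separation that HKDF's info/salt parameters provide today.
        return hashlib.shake_256(tag + secret_input).digest(n_bytes)

    key_material = shake256_kdf(b"example secret_input", b"example:key_expand", 72)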
I'll be fleshing out the proposal to specify 0x0103 around newhope, since the hybrid construct would be similar past the details of the key exchange itself.
Regards,
On Fri, 2016-01-01 at 11:14 +0000, Yawning Angel wrote:
On Thu, 31 Dec 2015 20:51:43 +0000 isis isis@torproject.org wrote: [snip]
I feel like there needs to be some new terminology here. It's certainly not post-quantum secure, but "quantum-safe" doesn't seem right either, because it's exactly the point at which the adversary gains appropriate quantum computational capabilities that it becomes *unsafe*. If I may, I suggest calling it "pre-quantum secure". :)
Post-quantum forward-secrecy is what I've been using to describe this property.
Isn't that using "forward security" to denote a weakening when it usually denotes a strengthening?
I personally don't think that any of the PQ signature schemes are usable for us right now, because the smallest key size for an algorithm that isn't known to be broken is ~1 KiB (SPHINCS256), and we probably can't afford to bloat our descriptors/micro-descriptors that much.
Did you mean to talk about the 41ish kb signature here?
I donno that you'll ever beat that 1kb key size with a post-quantum system. There is a lattice based signature scheme and an isogeny based scheme that'll both beat SPHINCS on signature sizes, but I think not so much on key size.
Jeff
p.s. I'd imagine that key size might come from the public key itself proving that it's a SPHINCS public key or doing a simple initial signature or something. If you didn't care during storage that the key is really a key, or what it's good for, then a 256 bit fingerprint of a SPHINCS public key would be as good as a SPHINCS public key itself, right? It's dubious that Tor, or anyone really, could use fingerprints in such a context-free way though.
On 02/03/2016 12:12 PM, Jeff Burdges wrote:
I donno that you'll ever beat that 1kb key size with a post-quantum system. There is a lattice based signature scheme and an isogeny based scheme that'll both beat SPHINCS on signature sizes, but I think not so much on key size.
I just wanted to resurrect this old thread to point out that supersingular isogeny key exchange (SIDH) is the isogeny scheme that you're referring to. Using a clever compression algorithm, SIDH only needs to exchange 3072 bits (384 bytes) at a 128-bit quantum security level. This beats SPHINCS by a mile and, unlike NTRUEncrypt, fits nicely into Tor's current cell size. I don't know about key sizes, though. If I recall correctly, SIDH's paper also references the "A quantum-safe circuit-extension handshake for Tor" paper that led to this proposal.
Again, I have very little understanding of post-quantum crypto and I'm just starting to understand ECC, but after looking over https://en.wikipedia.org/wiki/Supersingular_isogeny_key_exchange and skimming the SIDH paper, I'm rather impressed. SIDH doesn't seem to be patented, it's reasonably fast, it uses the smallest bandwidth, and it offers perfect forward secrecy. It seems to me that SIDH actually has more potential for making it into Tor than any other post-quantum cryptosystem.
On Sat, 2 Apr 2016 18:48:24 -0400 Jesse V kernelcorn@riseup.net wrote:
Again, I have very little understanding of post-quantum crypto and I'm just starting to understand ECC, but after looking over https://en.wikipedia.org/wiki/Supersingular_isogeny_key_exchange and skimming the SIDH paper, I'm rather impressed. SIDH doesn't seem to be patented, it's reasonably fast, it uses the smallest bandwidth, and it offers perfect forward secrecy. It seems to me that SIDH actually has more potential for making it into Tor than any other post-quantum cryptosystem.
Your definition of "reasonably fast" doesn't match mine. The number for SIDH (key exchange, when the thread was going off on a tangent about signatures) is ~200ms.
A portable newhope (Ring-LWE) implementation[0] on my laptop can do one side of the exchange in ~190 usec. Saving a few cells is not a good reason to use a key exchange mechanism that is 1000x slower (NTRUEncrypt is also fast enough to be competitive).
nb: Numbers are rough, and I don't have SIDH code to benchmark. newhope in particular vectorizes really well and the AVX2 code is even faster.
On 04/03/2016 02:52 AM, Yawning Angel wrote:
Your definition of "reasonably fast" doesn't match mine. The number for SIDH (key exchange, when the thread was going off on a tangent about signatures) is ~200ms.
A portable newhope (Ring-LWE) implementation[0] on my laptop can do one side of the exchange in ~190 usec. Saving a few cells is not a good reason to use a key exchange mechanism that is 1000x slower (NTRUEncrypt is also fast enough to be competitive).
I have yet to see any SIDH benchmarks either. I checked the citation but I wasn't able to confirm where the ~200ms number came from. Thanks for throwing out specific numbers on Ring-LWE, I wasn't aware that it was so fast.
On Sat, 2016-04-02 at 18:48 -0400, Jesse V wrote:
I just wanted to resurrect this old thread to point out that supersingular isogeny key exchange (SIDH) is the isogeny scheme that you're referring to. Using a clever compression algorithm, SIDH only needs to exchange 3072 bits (384 bytes) at a 128-bit quantum security level. This beats SPHINCS by a mile and, unlike NTRUEncrypt, fits nicely into Tor's current cell size. I don't know about key sizes, though. If I recall correctly, SIDH's paper also references the "A quantum-safe circuit-extension handshake for Tor" paper that led to this proposal.
I should read up on this compression business since I'd no idea they were so small. At first blush, these SIDH schemes must communicate curve parameters of the curve the isogeny maps to and two curve points to help the other party compute the isogeny on their prime's subgroup, so maybe 3-4 times the size of a curve point, but the curve is far larger than any used with normal ECDH too.
Warning: the signature schemes based on SIDH work by introducing another virtual party employing a third prime. And another more recent scheme needs two additional parties/primes! A priori, this doubles or triples the key material size, although it's maybe not so bad in practice. Also, these signature schemes have some unusual properties.
Again, I have very little understanding of post-quantum crypto and I'm just starting to understand ECC, but after looking over https://en.wikipedia.org/wiki/Supersingular_isogeny_key_exchange and skimming the SIDH paper, I'm rather impressed. SIDH doesn't seem to be patented, it's reasonably fast, it uses the smallest bandwidth, and it offers perfect forward secrecy. It seems to me that SIDH actually has more potential for making it into Tor than any other post-quantum cryptosystem.
It'll be years before anyone trusts SIDH because it's the youngest. And Ring-LWE has a much larger community doing optimizations, etc.
I like SIDH myself. I delved into it to see if it offered the blinding operation needed for the Sphinx mixnet packet format. It seemingly does not. And maybe no post-quantum system can do so.
All these post-quantum public key systems work by burying the key exchange inside a computation that usually goes nowhere, fails, etc.* In SIDH, it's replacing the kernel of the isogeny, which one can move between curves, with two curve points that let the other party evaluate your isogeny on their subgroup. As the isogenies themselves form only a groupoid, algorithms like Shor observe almost exclusively a failed product, so the QFT rarely yields anything.
As usual, there are deep mathematical questions here like : Has one really hidden the kernel by revealing only the isogeny on a special subgroup? Are there parameterizations of appropriate isogenies in ways that make the QFT dangerous again?
As an aside, there are new quantum query attacks on symmetric crypto like AEZ in http://arxiv.org/abs/1602.05973 . We believe a quantum query attack against symmetric crypto sounds unrealistic, of course: http://www.scottaaronson.com/blog/?p=2673 . A quantum query attack is completely realistic against a public key system though, so one should expect renewed effort to break the post-quantum systems by inventing new QFT techniques.
On Sun, 2016-04-03 at 06:52 +0000, Yawning Angel wrote:
Your definition of "reasonably fast" doesn't match mine. The number for SIDH (key exchange, when the thread was going off on a tangent about signatures) is ~200ms.
What code were you running? I think the existing SIDH implementations should not be considered optimized. Sage is even used in https://github.com/defeo/ss-isogeny-software . I've no idea about performance myself, but obviously the curves used in SIDH are huge, and the operations are generic over curves. And existing signature schemes might be extra slow due to this virtual third or fourth party. I know folks like Luca De Feo have ideas for optimizing operations that must be generic over curves, though.
Around signatures specifically, there are deterministically stateful hash-based, or partially hash-based, schemes that might still be useful: one might, for example, pre-compute a unique EdDSA key for each consensus during the next several months and build the Merkle tree of the hashes of their public keys. Any given consensus entry is vulnerable to a quantum attack immediately after the key gets used, but not the whole Merkle tree of EdDSA keys. A signature costs O(log m) where m is the number of consensuses covered by a single key. It's maybe harder to attack such a scheme while keeping your quantum computer secret. **
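A toy sketch of that idea, with random bytes standing in for the per-consensus EdDSA public keys (this is not a hardened XMSS-style construction):

    import hashlib
    import os

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        # Hash each per-consensus public key, then fold pairwise up to
        # the root; verifying one key later costs an O(log m)
        # authentication path, where m is the number of leaves.
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate the odd node out
            level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
        return level[0]

    # One placeholder "EdDSA public key" per consensus for several months.
    consensus_keys = [os.urandom(32) for _ in range(2048)]
    root = merkle_root(consensus_keys)  # only the root needs long-term pinning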
Jeff
* I'd be dubious that any non-abelian "group-based" scheme would remain post-quantum indefinitely, specifically because they lack this "usually just fails" property. It's maybe related to the issues with blinding operations and the difficulties in making signature schemes.
** I recently found a Merkle tree of encryption keys trick that enables post-quantum secure refresh operation in Taler.
On Sun, 03 Apr 2016 16:37:45 +0200 Jeff Burdges burdges@gnunet.org wrote:
On Sun, 2016-04-03 at 06:52 +0000, Yawning Angel wrote:
Your definition of "reasonably fast" doesn't match mine. The number for SIDH (key exchange, when the thread was going off on a tangent about signatures) is ~200ms.
What code were you running? I think the existing SIDH implementations should not be considered optimized. Sage is even used in https://github.com/defeo/ss-isogeny-software . I've no idea about performance myself, but obviously the curves used in SIDH are huge, and the operations are generic over curves. And existing signature schemes might be extra slow due to this virtual third or fourth party. I know folks like Luca De Feo have ideas for optimizing operations that must be generic over curves, though.
http://cacr.uwaterloo.ca/techreports/2014/cacr2014-20.pdf
Is "optimized" in that, it is C with performance critical parts in assembly (Table 3 is presumably the source of the ~200 ms figure from the wikipedia article). As i said, i just took the performance figures at face value.
I'm sure it'll go faster with time, but like you, I'm probably not going to trust SIDH for a decade or so.
Regards,
On 04/03/2016 10:37 AM, Jeff Burdges wrote:
I should read up on this compression business since I'd no idea they were so small. At first blush, these SIDH schemes must communicate curve parameters of the curve the isogeny maps to and two curve points to help the other party compute the isogeny on their prime's subgroup, so maybe 3-4 times the size of a curve point, but the curve is far larger than any used with normal ECDH too.
"Key Compression for Isogeny-Based Cryptosystems". Here's just the abstract: https://eprint.iacr.org/2016/229 and the full paper can be found here: https://eprint.iacr.org/2016/229.pdf
Yawning Angel <yawning@...> writes:
On Sat, 2 Apr 2016 18:48:24 -0400 Jesse V <kernelcorn@...> wrote:
Again, I have very little understanding of post-quantum crypto and I'm just starting to understand ECC, but after looking over https://en.wikipedia.org/wiki/Supersingular_isogeny_key_exchange and skimming the SIDH paper, I'm rather impressed. SIDH doesn't seem to be patented, it's reasonably fast, it uses the smallest bandwidth, and it offers perfect forward secrecy. It seems to me that SIDH actually has more potential for making it into Tor than any other post-quantum cryptosystem.
Your definition of "reasonably fast" doesn't match mine. The number for SIDH (key exchange, when the thread was going off on a tangent about signatures) is ~200ms.
A portable newhope (Ring-LWE) implementation[0] on my laptop can do one side of the exchange in ~190 usec. Saving a few cells is not a good reason to use a key exchange mechanism that is 1000x slower (NTRUEncrypt is also fast enough to be competitive).
nb: Numbers are rough, and I don't have SIDH code to benchmark. newhope in particular vectorizes really well and the AVX2 code is even faster.
Beware that the definition of newhope has changed! The authors have published a new version of this paper and some of the numbers are different. The parameter for the binomial distribution has changed from 12 to 16, the probability of failure has changed from 2^-110 to 2^-64, the core hardness of the attack has increased from 186 to 206 bits on a quantum computer, and the timings have increased slightly too.
I'm not sure that the newhope algorithm has settled down yet. There's also a new paper on IACR called "How (not) to instantiate ring-LWE" which has some ideas on how to choose the error distribution - this might mean that newhope has to change again??
-- lukep
On Wed, 20 Apr 2016 18:30:14 +0000 (UTC) lukep lukep@tutanota.com wrote:
Beware that the definition of newhope has changed! The authors have published a new version of this paper and some of the numbers are different. The parameter for the binomial distribution has changed from 12 to 16, the probability of failure has changed from 2^-110 to 2^-64, the core hardness of the attack has increased from 186 to 206 bits on a quantum computer, and the timings have increased slightly too.
I track the paper and reference code in the implementation I maintain. FWIW, the performance hasn't changed noticeably, unless there's something newer than 20160328.
I'm not sure that the newhope algorithm has settled down yet. There's also a new paper on IACR called "How (not) to instantiate ring-LWE" which has some ideas on how to choose the error distribution - this might mean that newhope has to change again??
Most of the changes since the paper was released have been minor. The last major algorithmic change I'm aware of was 20151209, which altered the reconciliation mechanism (I don't particularly count the March changes that changed the on-the-wire encoding format to be major; it's just a more compact way to send the same things).
Kind of a moot point, since by the time any of this is actually used in core tor, things will have settled. And my gut feeling is RingLWE will have performant, well defined implementations well before SIDH is a realistic option.
Regards,
Thanks Yawning.
Most of the changes since the paper was released have been minor. The last major algorithmic change I'm aware of was 20151209, which altered the reconciliation mechanism (I don't particularly count the March changes that changed the on-the-wire encoding format to be major; it's just a more compact way to send the same things).
Don't get me wrong, I like the ring-LWE algorithm; I just wanted to sound a note of caution that there's a new version with different parameters, possibly incompatible with the previous version (depending on implementation), and that it's still pretty new, so it could change, or someone could find a problem with the security proof.
Kind of a moot point, since by the time any of this is actually used in core tor, things will have settled. And my gut feeling is RingLWE will have performant, well defined implementations well before SIDH is a realistic option.
How long do you think it would take for tor? Is there a version of Zhang's quantum safe hybrid protocol for newhope?
I track the paper and reference code in the implementation I maintain. FWIW, the performance hasn't changed noticeably, unless there's something newer than 20160328.
I can now see you've got a new version in your github repository. Is this standalone or have you tried incorporating it into tor source code yet?
Thanks
-- lukep
I'd imagine everyone in this thread knows this, but New Hope requires that "both parties use fresh secrets for each instantiation".
I suppose any key exchange designed around this meshes well enough with ntor, so that's okay. It leaves you relying on ECDH for the key exchange with long term key material though.
I have not read the papers on doing Ring-LWE key exchanges with long term key material, but presumably they increase the key size.
On Wed, 2016-04-20 at 19:00 +0000, Yawning Angel wrote:
And my gut feeling is RingLWE will have performant, well defined implementations well before SIDH is a realistic option.
This is undoubtedly true because too few people are looking into SIDH.
I've been chatting with Luca about writing a "more production ready" implementation, like optimizing the GF(p^2) field operations and things. If that happens, maybe it'll spur more interest.
There is some chance SIDH might wind up being preferable for key exchanges with long term key material.
Jeff
On Fri, 22 Apr 2016 11:41:30 +0200 Jeff Burdges burdges@gnunet.org wrote:
I'd imagine everyone in this thread knows this, but New Hope requires that "both parties use fresh secrets for each instantiation".
Yep. Alice can cache the public 'a' parameter, but everything else needs to be fresh, or really really bad things happen.
I suppose any key exchange designed around this meshes well enough with ntor, so that's okay. It leaves you relying on ECDH for the key exchange with long term key material though.
Or having a mythical "usable" PQ signature algorithm and doing SIGMA-I or SIGMA-R.
I have not read the papers on doing Ring-LWE key exchanges with long term key material, but presumably they increase the key size.
It's not a massive priority now because active adversaries with quantum computers aren't going to exist for a while IMO.
On Wed, 2016-04-20 at 19:00 +0000, Yawning Angel wrote:
And my gut feeling is RingLWE will have performant, well defined implementations well before SIDH is a realistic option.
This is undoubtedly true because too few people are looking into SIDH.
I've been chatting with Luca about writing a "more production ready" implementation, like optimizing the GF(p^2) field operations and things. If that happens, maybe it'll spur more interest.
I'll be looking forward to seeing the results.
There is some chance SIDH might wind up being preferable for key exchanges with long term key material.
Maybe. Some people have hinted at an improved version of SPHINCS256 being in the works as well. SIDH will result in fewer bytes on the wire (which is a major advantage in its favor).
nb: I'm not totally dismissing SIDH, and I do think it has potential. That said, the engineering effort to protect against "giant datacenters in Utah" is something that should have been started a decade ago, and getting something that is adequate and deploy-able "soon" is more important to me than "waiting on perfection".
On Fri, 2016-04-22 at 11:10 +0000, Yawning Angel wrote:
On Fri, 22 Apr 2016 11:41:30 +0200 Jeff Burdges burdges@gnunet.org wrote:
I'd imagine everyone in this thread knows this, but New Hope requires that "both parties use fresh secrets for each instantiation".
Yep. Alice can cache the public 'a' parameter, but everything else needs to be fresh, or really really bad things happen.
I'd assume that 'a' could be generated by both parties from a seed parameter declared by Alice? I haven't noticed it being secretly tweaked by Alice.
If 'a' must be sent, then 'a' would double the key size from one side, no? In that case, one should consider if reversing the Alice and Bob roles of the client and server helped by either (a) adjusting the traffic flow to mask circuit building, or (b) allowing one to stash 'a' in the descriptor. I donno if there are any concerns like the client needing to influence 'a'.
There is some chance SIDH might wind up being preferable for key exchanges with long term key material.
Maybe. Some people have hinted at an improved version of SPHINCS256 being in the works as well.
Ain't clear how that'd work.
There are homomorphic MAC-like constructions, like accumulators, which might let you reuse your Merkle tree. I thought these usually needed assumptions that Shor's algorithm nukes, like one is f(x,y) = x^7 + 3 y^7 mod N with N=pq secret, for example.
I suppose Nyberg's accumulator scheme is post-quantum, but I thought it's huge too. I'm not completely sure if just an accumulator helps anyways.
Jeff
On Fri, 22 Apr 2016 14:58:45 +0200 Jeff Burdges burdges@gnunet.org wrote:
On Fri, 2016-04-22 at 11:10 +0000, Yawning Angel wrote:
On Fri, 22 Apr 2016 11:41:30 +0200 Jeff Burdges burdges@gnunet.org wrote:
I'd imagine everyone in this thread knows this, but New Hope requires that "both parties use fresh secrets for each instantiation".
Yep. Alice can cache the public 'a' parameter, but everything else needs to be fresh, or really really bad things happen.
I'd assume that 'a' could be generated by both parties from a seed parameter declared by Alice? I haven't noticed it being secretly tweaked by Alice.
Indeed. The paper (and code) uses a 256 bit seed value, and SHAKE128.
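A simplified sketch of that derivation; n = 1024 and q = 12289 are the newhope parameters, but the sampling and encoding details of the real gen_a differ from this toy version:

    import hashlib

    N, Q = 1024, 12289  # newhope ring dimension and modulus

    def gen_a(seed: bytes):
        # Expand Alice's 256-bit seed with SHAKE128 and rejection-sample
        # 16-bit values into coefficients; both sides run this, so only
        # the 32-byte seed travels on the wire.
        assert len(seed) == 32
        stream = hashlib.shake_128(seed).digest(8 * N)  # oversampled
        coeffs, i = [], 0
        while len(coeffs) < N:
            v = int.from_bytes(stream[i:i + 2], "little")
            i += 2
            if v < 5 * Q:  # accept (the real code keeps v and reduces lazily)
                coeffs.append(v % Q)
        return coeffs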
If 'a' must be sent, then 'a' would double the key size from one side, no? In that case, one should consider if reversing the Alice and Bob roles of the client and server helped by either (a) adjusting the traffic flow to mask circuit building, or (b) allowing one to stash 'a' in the descriptor. I donno if there are any concerns like the client needing to influence 'a'.
There's still an asymmetry in the amount of data sent by each party because the (b, seed) and (u, r) are different lengths, with r being larger than seed (So Bob's response is longer).
The easiest thing to do would be to append padding to (b, seed) (which is what the code I'm writing that uses this primitive does). Stashing 'a' in the descriptor won't save much (eliminating one SHAKE-128 call per side isn't enough of a performance gain to justify not randomizing 'a' for every key exchange, IMO).
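For concreteness, a sketch of that padding; the 1824- and 2048-byte message sizes are my reading of the newhope paper's encodings, not figures from this thread:

    ALICE_MSG_LEN = 1824  # poly b (1792 bytes) + 32-byte seed  (assumed)
    BOB_MSG_LEN = 2048    # poly u (1792 bytes) + 256-byte rec  (assumed)

    def pad_alice(msg: bytes) -> bytes:
        # Zero-pad Alice's (b, seed) so both directions carry
        # equal-length handshake payloads on the wire.
        assert len(msg) == ALICE_MSG_LEN
        return msg + b"\x00" * (BOB_MSG_LEN - ALICE_MSG_LEN)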
There is some chance SIDH might wind up being preferable for key exchanges with long term key material.
Maybe. Some people have hinted at an improved version of SPHINCS256 being in the works as well.
Ain't clear how that'd work.
There are homomorphic MAC-like constructions, like accumulators, which might let you reuse your Merkle tree. I thought these usually needed assumptions that Shor's algorithm nukes, like one is f(x,y) = x^7 + 3 y^7 mod N with N=pq secret, for example.
I suppose Nyberg's accumulator scheme is post-quantum, but I thought it's huge too. I'm not completely sure if just an accumulator helps anyways.
I didn't get details unfortunately.
My current assumption is that by the time we need to seriously start thinking about auth and need to design/deploy a new construct around that as part of our threat model, both signatures and key exchange algorithms will be more fleshed out, but in the immediate future, going by what's available and practical pushes me towards NTRUEncrypt/Ring-LWE.
Regards,
Yawning Angel transcribed 4.3K bytes:
On Fri, 22 Apr 2016 14:58:45 +0200 Jeff Burdges burdges@gnunet.org wrote:
On Fri, 2016-04-22 at 11:10 +0000, Yawning Angel wrote:
On Fri, 22 Apr 2016 11:41:30 +0200 Jeff Burdges burdges@gnunet.org wrote:
There is some chance SIDH might wind up being preferable for key exchanges with long term key material.
Maybe. Some people have hinted at an improved version of SPHINCS256 being in the works as well.
Ain't clear how that'd work.
There are homomorphic MAC-like constructions, like accumulators, which might let you reuse your Merkle tree. I thought these usually needed assumptions that Shor's algorithm nukes, like one is f(x,y) = x^7 + 3 y^7 mod N with N=pq secret, for example.
I suppose Nyberg's accumulator scheme is post-quantum, but I thought it's huge too. I'm not completely sure if just an accumulator helps anyways.
I didn't get details unfortunately.
It's not my paper, so I probably shouldn't give too much away, but…
Essentially, there are two different optimisations being discussed: one allows faster signing times via batching, which can optionally also be used to decrease the size of the signatures (though this assumes you're sending several signatures in succession to the same party). That optimisation is maybe useful for something like PQ Bitcoin; probably not so much for Tor.
The second optimisation is applying the ideas from XMSS-T [0] to SPHINCS. The authors say that the size of the public keys will be 64 bytes. My napkin calculation says, given that our current network bandwidth capacity is ~175 Gbit/s, and ~75 Gbit/s is utilised, with 1 Gbit/s used for dirfetches, adding 64 bytes to each microdesc increases the dirfetch bandwidth overhead by about 33%.
Hence, the latter may be a feasible method for extending our post-quantum security to the authentication; however, this _doesn't_ mean we should stall on making the KEX post-quantum forward-secure. As Yawning pointed out earlier in this thread, "there is a database in Utah storing all the handshakes and we should've been working against that ten years ago", and also:
My current assumption is that by the time we need to seriously start thinking about auth and need to design/deploy a new construct around that as part of our threat model, both signatures and key exchange algorithms will be more fleshed out, but in the immediate future, going by what's available and practical pushes me towards NTRUEncrypt/Ring-LWE.
On that note, I've been working on a Tor proposal for a Ring-LWE + X25519 hybrid KEX. It'll be ready for design review soon.
[0]: https://joostrijneveld.nl/papers/multitarget_xmss/
Best Regards,
On Sun, 2016-04-03 at 15:36 +0000, Yawning Angel wrote:
http://cacr.uwaterloo.ca/techreports/2014/cacr2014-20.pdf
Is "optimized" in that, it is C with performance critical parts in assembly (Table 3 is presumably the source of the ~200 ms figure from the wikipedia article). As i said, i just took the performance figures at face value.
I'm sure it'll go faster with time, but like you, I'm probably not going to trust SIDH for a decade or so.
There is a new SIDH library from MS Research : https://eprint.iacr.org/2016/413.pdf https://research.microsoft.com/en-us/projects/sidh/
On Tue, 2016-04-26 at 15:05 +0000, isis wrote:
It's not my paper, so I probably shouldn't give too much away, but…
Essentially, there are two different optimisations being discussed: one allows faster signing times via batching, which can optionally also be used to decrease the size of the signatures (though this assumes you're sending several signatures in succession to the same party). That optimisation is maybe useful for something like PQ Bitcoin; probably not so much for Tor.
It's maybe worth keeping this sort of tool in mind for things like co-signing.
On Fri, 29 Apr 2016 20:54:18 +0200 Jeff Burdges burdges@gnunet.org wrote:
On Sun, 2016-04-03 at 15:36 +0000, Yawning Angel wrote:
http://cacr.uwaterloo.ca/techreports/2014/cacr2014-20.pdf
Is "optimized" in that, it is C with performance critical parts in assembly (Table 3 is presumably the source of the ~200 ms figure from the wikipedia article). As i said, i just took the performance figures at face value.
I'm sure it'll go faster with time, but like you, I'm probably not going to trust SIDH for a decade or so.
There is a new SIDH library from MS Research : https://eprint.iacr.org/2016/413.pdf https://research.microsoft.com/en-us/projects/sidh/
Yeah, it's neat and I'm glad people are poking at it (I honestly do like the paper).
On an i7-5600U (Turbo Boost, Planetary Alignment, etc. grain of salt), Alice's side of the SIDH benchmarks at:
* ~25 ms (Generate/Exchange)
* ~40 ms (Generate/Validate/Exchange)
Using BigMont adds a few ms; while I think it's cute, X448 exists.
(Yes, I know 384 bits > 224 bits, but it's not worth the perf/key size penalty for what amounts to "using bigger numbers makes my Tin Foil Hat crinkle less").
... and in the context of tor, we do X25519 on the side anyway ...
The general construct in the Tor proposal is flexible as to which PQ key exchange algorithm is used, so I'm somewhat more inclined to push for shipping something that is deployable sooner (being wrong here at worst means having to switch algorithms) than waiting for things to solidify more.
Regards,
On 12/28/2015 11:34 PM, Zhenfei Zhang wrote:
1.2 Motivation: Disaster resilience
We are really trying to protect against the disastrous situation of one key being entirely compromised. By introducing a second cryptographic primitive, namely, NTRUEncrypt, we ensure that the system remains secure in those extreme scenarios.
[snipped]
2.1.1 Achieved Property:
[snipped]
2) The proposed key exchange method is disaster resilient: The protocol depends on two cryptographic primitives. Compromising one does not break the security of the whole system.
I'm a little confused about what exactly is meant by "disaster resilience" here.
If the disaster is that there's a major cryptanalytic breakthrough against ECC (or, more likely, against NTRU), then I think it would be better to say explicitly that a design goal is that the handshake should be no weaker than either primitive, rather than to say the threat is against "a disaster".
However, the wording in §1.2 seems to indicate that the goal is to defend against an attacker who can, e.g., obtain a relay's ECC key without also obtaining their NTRU key. Both keys are in the same security perimeter, so I'm not sure what this really achieves in practice.
I think it would be good to remove the word 'disaster' entirely and say what exactly the threat is and what the design goal is: the threat is an attack on one of the cryptosystems, and the goal is that the handshake should be no weaker than either.
Cheers, Henry de Valence
Zhenfei Zhang <zzhang@...> writes:
2.2.2 Handshake

To perform the handshake, the client needs to know an identity key digest for the server, and an ntor onion key (a curve25519 public key) for that server. Call the ntor onion key "B". The client generates a temporary key pair:

    x, X = KEYGEN();

an NTRU temporary key pair:

    QSSK, QSPK = QSKEYGEN();

and generates a client-side handshake with contents:

    NODEID     Server identity digest  [ID_LENGTH bytes]
    KEYID      KEYID(B)                [H_LENGTH bytes]
    CLIENT_PK  X                       [G_LENGTH bytes]
    QSPK       QSPK                    [QSPK_LENGTH bytes]

The server generates an ephemeral curve25519 keypair:

    y, Y = KEYGEN();

an ephemeral "parallel" secret for encryption with NTRU:

    PAR_SEC    P                       [H_LENGTH bytes]

and computes:

    C = ENCRYPT( P | B, QSPK );

Then it uses its ntor private key 'b' to compute an ECC secret

    E = EXP(X,y) | EXP(X,b) | B | X | Y

and computes:

    secret_input = E | P | QSPK | ID | PROTOID
    #pre secret_input = E | ID | PROTOID
    KEY_SEED = H(secret_input, t_key)
    verify = H(secret_input, t_verify)
    auth_input = verify | B | Y | X | C | QSPK | ID | PROTOID | "Server"
    #pre auth_input = verify | B | Y | X | ID | PROTOID | "Server"

The server's handshake reply is:

    SERVER_PK  Y                       [G_LENGTH bytes]
    AUTH       H(auth_input, t_mac)    [H_LENGTH bytes]
    QSCIPHER   C                       [QSPK_LENGTH bytes]

The client then checks Y is in G^*, and computes:

    E = EXP(Y,x) | EXP(B,x) | B | X | Y
    P' = DECRYPT(C, QSSK)

extracts P and B from P' (P' = P|B), verifies B, and computes:

    secret_input = E | P | QSPK | ID | PROTOID
    #pre secret_input = E | ID | PROTOID
    KEY_SEED = H(secret_input, t_key)
    verify = H(secret_input, t_verify)
    auth_input = verify | B | Y | X | C | QSPK | ID | PROTOID | "Server"
    #pre auth_input = verify | B | Y | X | ID | PROTOID | "Server"

The client verifies that AUTH == H(auth_input, t_mac). Both parties now have a shared value for KEY_SEED. This value will be used during the key derivation function KDF-RFC5869 (see section 5.2.2 of tor-spec.txt).
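To make the H(x, t) notation above concrete: in ntor, H(x, t) is HMAC-SHA256 keyed with the tag t, and the tags are derived from PROTOID (tor-spec 5.1.4). A sketch of the server-side derivation, with the usual ntor-style tag suffixes (the exact PROTOID string for this proposal is not reproduced here):

    import hashlib
    import hmac

    def H(x: bytes, t: bytes) -> bytes:
        # ntor convention: H(x, t) = HMAC-SHA256 with key t.
        return hmac.new(t, x, hashlib.sha256).digest()

    def server_derive(E: bytes, P: bytes, QSPK: bytes, ID: bytes, PROTOID: bytes):
        # Mirrors the quoted server-side computation above.
        secret_input = E + P + QSPK + ID + PROTOID
        key_seed = H(secret_input, PROTOID + b":key_extract")
        verify = H(secret_input, PROTOID + b":verify")
        return key_seed, verify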
Hi, I'm trying to understand the hybrid protocol that's described here. The server generates the parallel secret PAR_SEC or P and then computes C = ENCRYPT( P | B | Y, QSPK); The client decrypts C to get P and then uses it in combination with the ECC secret E: secret_input = E | P | QSPK | ID | PROTOID
So E is secret, P is generated by the server, and QSPK, ID, and PROTOID are all public. So IF ECC is broken and IF the server has been compromised (big IF's!) then everything is known.
I guess my point is that the client isn't contributing any secret information to the quantum-safe part of KEY_SEED. Is that OK?
-- lukep
On Thu, 3 Mar 2016 16:33:42 +0000 (UTC) lukep lukep@tutanota.com wrote:
Hi, I'm trying to understand the hybrid protocol that's described here. The server generates the parallel secret PAR_SEC or P and then computes C = ENCRYPT( P | B | Y, QSPK); The client decrypts C to get P and then uses it in combination with the ECC secret E: secret_input = E | P | QSPK | ID | PROTOID
So E is secret, P is generated by the server, and QSPK, ID, and PROTOID are all public. So IF ECC is broken and IF the server has been compromised (big IF's!) then everything is known.
If the server is compromised, the client loses regardless of handshake mechanism because said compromised server has access to the plaintext.
I guess my point is that the client isn't contributing any secret information to the quantum-safe part of KEY_SEED. Is that OK?
Yes. I'd rather both sides contribute, but since `P | B | Y` is transmitted encrypted to an ephemeral key generated by the client, barring the "the server is compromised" case (which is irrelevant) the construct should hold.
See RFC 4432 and RFC 5246, section 7.4.7.1, for this sort of thing being done with RSA.
Note: The Ring-LWE variant of this hybrid construct would fulfill the "both sides contribute material" clause (yay).
Regards,
On Thu, Mar 3, 2016 at 3:16 PM, Yawning Angel yawning@schwanenlied.me wrote:
On Thu, 3 Mar 2016 16:33:42 +0000 (UTC) lukep lukep@tutanota.com wrote:
Hi, I'm trying to understand the hybrid protocol that's described here. The server generates the parallel secret PAR_SEC or P and then computes C = ENCRYPT( P | B | Y, QSPK); The client decrypts C to get P and then uses it in combination with the ECC secret E: secret_input = E | P | QSPK | ID | PROTOID
So E is secret, P is generated by the server, and QSPK, ID, and PROTOID are all public. So IF ECC is broken and IF the server has been compromised (big IF's!) then everything is known.
If the server is compromised, the client loses regardless of handshake mechanism because said compromised server has access to the plaintext.
Also: if the client has a bad RNG, the client loses whether the material it contributes to the eventual secret is secret or public.
I guess my point is that the client isn't contributing any secret information to the quantum-safe part of KEY_SEED. Is that OK?
Yes. I'd rather both sides contribute, but since `P | B | Y` is transmitted encrypted to an ephemeral key generated by the client, barring the "the server is compromised" case (which is irrelevant) the construct should hold.
See RFC 4432, RFC 5246 7.4.7.1 for this sort of thing being done with RSA.
Note: The Ring-LWE variant of this hybrid construct would fulfill the "both sides contribute material" clause (yay).
Just to be careful here, both sides do contribute material (because the QS ciphertext, which is derived from P, is hashed into secret_input). The difference is that the client doesn't contribute *secret* material. However, it's hard to come up with an attack model in which this actually matters, even though there are obviously more warm fuzzies if the secret material comes from both sides.
Cheers,
William