I too have been following the development of SHA-3 and will toss in my 2c here.
On 11/01/2011 06:50 AM, Watson Ladd wrote:
Turns out that almost everything you said about SHA3 vs SHA256 performance is wrong: http://bench.cr.yp.to/impl-hash/blake256.html Blake256 performs better except on the Cortex A. On the ARM v6 it outperforms SHA256. This includes the ppc32, hardly anyone's idea of a server powerhouse.
I don't know about the specific benchmarks you mentioned, but most of the candidates fall into the range of 5 to 12 cycles per hashed byte on modern CPUs.
SHA-3 is also being developed with attention to the amount of circuitry ("die area") needed to implement it in hardware. So it's possible that hardware acceleration will appear for SHA-3 sooner than, or instead of, for SHA-2.
Furthermore, crypto efficiency is less likely to be a bottleneck on a client than on a node:
Desktop PCs with a 50 W CPU are a shrinking slice of the whole client pie. Mobile devices are the growing slice, and there the concerns are different: is hardware acceleration available, and what is the power consumption?
If I had to guess what would be most available and power-efficient on mobile devices 5 years from now, I'd guess SHA-3.
Server architectures matter much more because we do a lot more crypto on them. (This isn't true per connection, but servers handle more connections than clients do.)
How big of a network pipe does a dedicated Tor server need to bottleneck on the crypto?
Doesn't the architecture of Tor prefer a larger number of smaller nodes?
Secondly, SHA256 is already weaker than an ideal hash function. Joux's multicollision attack works on all Merkle–Damgård constructions, and gives multicollisions faster than is possible for an ideal hash.
Agreed, SHA-3 will fix some problems. Some of these things we've been working around so long that they seem normal.
Length extension attacks make HMAC use 2 hashes instead of 1, something that any speed comparison should remember. (HMAC is a bad idea anyway: quadratic security bounds are not the best possible, and since we have to use nonces anyway to prevent replay attacks, Wegman-Carter is a better idea on both counts: faster and more secure. GCM would be an example of this.)
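(For readers following along: the "2 hashes instead of 1" refers to HMAC's nested structure, which is what defeats length extension. A minimal sketch using Python's hashlib, checked against the stdlib hmac module; the key and message here are made up:)

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    block_size = 64  # SHA-256 processes 64-byte blocks
    if len(key) > block_size:
        key = hashlib.sha256(key).digest()
    key = key.ljust(block_size, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5c for b in key)
    inner = hashlib.sha256(ipad + msg).digest()   # first hash pass
    return hashlib.sha256(opad + inner).digest()  # second hash pass: blocks length extension

key, msg = b"secret key", b"hello world"
tag = hmac_sha256(key, msg)
assert tag == hmac.new(key, msg, hashlib.sha256).digest()
```

A plain H(key || msg) MAC over a Merkle–Damgård hash would let an attacker extend msg without knowing the key; the outer hash over the inner digest is what costs the second compression pass.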
I know Wegman-Carter is not new, but where is it being used in practice?
It looks like NIST took a while to figure out the security on GCM:
http://www.csrc.nist.gov/groups/ST/toolkit/BCM/documents/comments/CWC-GCM/Fe... http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/Joux_comments.pdf
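(The Wegman-Carter idea itself is simple: a universal hash of the message, masked by a fresh per-message pad. Here is a toy sketch in Python; this is not GCM's GHASH, the prime and the block encoding are illustrative only, and the encoding is not a secure padding scheme:)

```python
# Toy Wegman-Carter MAC: polynomial-evaluation universal hash mod a prime,
# masked by a one-time pad derived per message (e.g., from a nonce).
P = 2**61 - 1  # Mersenne prime (illustrative choice)

def poly_hash(r: int, blocks: list) -> int:
    # Evaluate the message polynomial at the secret point r, mod P.
    acc = 0
    for b in blocks:
        acc = (acc + b) * r % P
    return acc

def wc_mac(r: int, pad: int, msg: bytes) -> int:
    # Split into 7-byte blocks; set a marker bit so the toy encoding
    # distinguishes block lengths (NOT a rigorous prefix-free encoding).
    blocks = [int.from_bytes(msg[i:i + 7], "big") | (1 << 56)
              for i in range(0, len(msg), 7)]
    return (poly_hash(r, blocks) + pad) % P  # pad must be fresh per message

r, pad = 123456789, 987654321  # made-up secret values for the demo
tag = wc_mac(r, pad, b"hello")
```

The universal hash is a couple of multiplications per block, which is why the construction can beat HMAC on speed; its security bound follows from the polynomial's bounded number of roots rather than from hash-function assumptions.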
As a KDF none of this really matters,
What matters for a KDF is some assurance that the attacker will not have access to a significantly faster implementation than the defender. I believe scrypt has the best claim to that right now, although something based on the arcfour algorithm could do a little better.
This is a case where performance for the defender translates to additional security (he can set the iteration count higher).
Having benchmarks on optimized hardware implementations is thus important.
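(To make the iteration-count point concrete, here is a minimal scrypt sketch using Python's hashlib, which requires OpenSSL support; the cost parameters shown are illustrative, not a recommendation:)

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# n = CPU/memory cost (a power of two), r = block size, p = parallelism.
# Memory use is roughly 128 * n * r bytes (about 16 MiB here).
# A faster implementation on the defender's side lets you raise n
# within the same latency budget, which directly raises attack cost.
key = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, dklen=32)
```

The memory-hard design is what backs the "no significantly faster attacker implementation" claim: a GPU or ASIC attacker has to pay for the memory too, not just raw hashing throughput.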
and for signatures collision resistance is still the most important thing. But sometimes we do depend on random oracle assumptions in proofs, and SHA3 is designed to be a better approximation to a random oracle than SHA2.
There's sometimes also a benefit to being with the current NIST recommendation. I suspect more users will migrate from SHA-1 to SHA-3 than will migrate to SHA-2.
NIST may eventually 'deprecate' SHA-2 in favor of SHA-3 due to the length extension issue alone. That's not to say I think there's a real problem with using SHA-2 correctly, only that you may end up having to explain repeatedly why it's not a problem.
- Marsh