In reference to: https://lists.torproject.org/pipermail/tor-dev/2011-November/002999.html
FOR A HASH FUNCTION: SHA256, switching to SHA3 in 2012 when it comes out. It might be worthwhile waiting for SHA3 in most places and skipping over the SHA256 stage entirely.
The AES contest resulted in a cipher that was much faster than 3-DES and probably safer as well.
It is looking like the SHA-3 contest will result in a hash function that is slightly slower than SHA-256, and not obviously safer either!
To be a bit more precise about the performance issue: the most efficient SHA-3 candidates are faster than SHA-256 only on large, expensive, powerful x86_64 servers, and only on long messages (more than a couple of thousand bytes). This performance advantage almost certainly doesn't matter to the tor project! (Nor, I suspect, to almost anyone else.)
I'm guessing (sorry for my ignorance about these important facts) that tor uses secure hashes in two ways: first as the "nails" holding the crypto together, such as in commitments, key-derivation, HMAC, and so forth, and second to integrity-check the bulk data in chunks that are approximately "packet sized" -- a few thousand bytes. On a powerful x86_64 server, hashing a 4096-byte message [1] takes about 33,000 cycles with the most efficient SHA-3 candidate (Blake-256) and about 73,000 cycles with SHA-256, so the difference is about 40,000 CPU cycles. Assuming that all of this is done on a single core of a 3.4 GHz chip, that means SHA-256 takes about 12 microseconds longer to hash this 4096-byte packet.
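Here is a minimal back-of-the-envelope sketch in Python (my own illustration, not anything from the tor code or from the measurements I cited; the cycle counts and the 3.4 GHz clock are just the assumptions stated above) that turns those cycle counts into the per-packet time difference:

    # Cycle counts for hashing one 4096-byte packet on a powerful x86_64
    # core, as quoted above; the clock speed is the assumed 3.4 GHz.
    BLAKE256_CYCLES = 33_000
    SHA256_CYCLES = 73_000
    CLOCK_HZ = 3.4e9

    extra_cycles = SHA256_CYCLES - BLAKE256_CYCLES
    extra_us = extra_cycles / CLOCK_HZ * 1e6
    print(f"SHA-256 costs ~{extra_us:.0f} extra microseconds per 4096-byte packet")
    # -> about 12 microseconds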
I don't think that makes much of a difference to anyone. You'd have to process data at more than 190,000,000 bytes per second before this would exceed your ability to do it with SHA-256 on a single one of the 4 cores that come in that chip. Is that a realistic amount of data for a single tor node to process per second in the foreseeable future? (Honest question: I have no idea whether it is, although I would guess not.) Anyway, if you *do* want to process more than 190 MBytes/s on a fancy server in the next few years, you can probably just use more than one of its cores.
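The 190 MBytes/s figure follows directly from the cycle counts. A small Python sketch (again my own, using the assumed 73,000 cycles per packet and 3.4 GHz clock) derives it, and then uses hashlib and timeit from the standard library to measure what SHA-256 actually achieves on whatever machine you run it on:

    import hashlib
    import timeit

    SHA256_CYCLES_PER_PACKET = 73_000   # cycles to hash 4096 bytes, as above
    PACKET_BYTES = 4096
    CLOCK_HZ = 3.4e9                    # one core of the assumed 3.4 GHz chip

    # Derived single-core ceiling: packets per second times packet size.
    limit = CLOCK_HZ / SHA256_CYCLES_PER_PACKET * PACKET_BYTES
    print(f"derived single-core SHA-256 limit: ~{limit / 1e6:.0f} MBytes/s")
    # -> about 190 MBytes/s, the figure above

    # Empirical check: hash 100,000 packet-sized chunks with hashlib.
    packet = b"\x00" * PACKET_BYTES
    n = 100_000
    elapsed = timeit.timeit(lambda: hashlib.sha256(packet).digest(), number=n)
    print(f"measured on this machine: ~{n * PACKET_BYTES / elapsed / 1e6:.0f} MBytes/s")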
On the other hand, what if someone wants to deploy Tor on a 32-bit ARM CPU such as the one in a Freedom Box or a smart phone? When it is hashing 4096-byte packets, Blake-256 is actually less efficient than SHA-256 by about 36,000 cycles. Since the chip in that device is running at a slower clock rate (maybe 800 MHz), it takes 45 microseconds more to hash that 4096-byte packet with Blake-256 than with SHA-256. If you hash those packets with the slower of the two algorithms (Blake-256), you could handle about 25 MBytes/s using 100% of the only CPU in the system; if you used SHA-256 instead, you could handle 35 MBytes/s. If you are using, say, 90% of that CPU for other tasks, such as playing a game or watching a movie while running Tor in the background, or even playing a movie that is streaming in over Tor live, then this is the difference between being able to process 2.5 MBytes/s (Blake-256) and 3.5 MBytes/s (SHA-256). That seems to be a difference that might matter in practice, unlike the performance difference on expensive x86_64 servers.
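The same kind of back-of-the-envelope arithmetic, in Python, for the ARM case (again my own sketch; the 800 MHz clock, the 35 MBytes/s SHA-256 figure, and the 36,000-cycle gap are the assumptions stated above, and the per-packet SHA-256 cycle count is inferred from them rather than measured):

    CLOCK_HZ = 800e6          # assumed 32-bit ARM clock rate
    PACKET_BYTES = 4096

    # SHA-256 handles ~35 MBytes/s on this chip, which implies roughly
    # 94,000 cycles per 4096-byte packet; Blake-256 costs ~36,000 more.
    SHA256_CYCLES = CLOCK_HZ * PACKET_BYTES / 35e6
    BLAKE256_CYCLES = SHA256_CYCLES + 36_000

    def mbytes_per_s(cycles_per_packet, cpu_share):
        return CLOCK_HZ * cpu_share / cycles_per_packet * PACKET_BYTES / 1e6

    for share in (1.0, 0.10):   # all of the CPU, or the 10% left over
        print(f"{share:.0%} of the CPU: "
              f"SHA-256 ~{mbytes_per_s(SHA256_CYCLES, share):.1f} MBytes/s, "
              f"Blake-256 ~{mbytes_per_s(BLAKE256_CYCLES, share):.1f} MBytes/s")
    # -> roughly 35 vs 25 MBytes/s at 100%, and 3.5 vs 2.5 MBytes/s at 10%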
Okay, what about the "not obviously safer" part? I think there was a bit of a panic a few years ago, in the aftermath of Wang Xiaoyun's breakthrough on SHA-1, that someone might suddenly figure out a way to find collisions in SHA-256. That panic spurred the creation of the SHA-3 project. However, in the intervening years nobody has published any techniques that really threaten the safety of SHA-256, so now I'm personally no longer so confident that SHA-3 candidates like Blake will endure longer than SHA-256 will before someone finds a fatal flaw in them. SHA-256 has endured substantial analysis by experts for about a decade now; Blake and its fellow competitors have had about three years.
I'm not saying that I'm confident that SHA-256 will outlast Blake! I'm saying it could go either way.
Bottom line: if I were you, I would probably move ahead with SHA-256 now and let SHA-3 mature for a few extra years before planning a move to it.
Regards,
Zooko