What's the long-term effect of Heartbleed on Tor?
* Should we consider every key that was created before Tuesday a bad key and lower their consensus weight?
* Should authorities scan for bad OpenSSL versions and force their weight down to 20?
A lot of relays will continue running bad OpenSSL versions, which seriously hurts the security of Tor. A month from now the NSA/GCHQ/AIVD/etc. may know the private keys of a large chunk of these relays and possibly be able to decrypt a big chunk of traffic...
Tom
- Should authorities scan for bad OpenSSL versions and force their weight down to 20?
I'd be interested in hearing people's thoughts on how to do such scanning ethically (and perhaps legally). I was under the impression the only way to do this right now is to actually trigger the bounds bug and export some quantity (at least 1 byte) of memory from the vulnerable machine.
According to Qualys, they have developed a test that "verifies the problem without retrieving any bytes from the server, other than the bytes we send in the heartbeat request": https://community.qualys.com/blogs/securitylabs/2014/04/08/ssl-labs-test-for...
Best regards,
Alexander

---
PGP Key: 0xC55A356B | https://dietrich.cx/pgp
On 2014-04-09 20:51, Paul Pearce wrote:
- Should authorities scan for bad OpenSSL versions and force their weight down to 20?

I'd be interested in hearing people's thoughts on how to do such scanning ethically (and perhaps legally). I was under the impression the only way to do this right now is to actually trigger the bounds bug and export some quantity (at least 1 byte) of memory from the vulnerable machine.

_______________________________________________
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays
I just had a quick look at the code that caused the bug (good overview at http://blog.existentialize.com/diagnosis-of-the-openssl-heartbleed-bug.html). The problem is that a length (an unsigned short) is read from the incoming data, but the code never checks whether the record actually contains that many bytes before copying them into the response.
The only way I can think of to check this is to manipulate the payload length. Making this length lower than what we actually send should result in only part of the original data being sent back. Of course, this behavior is the same in both the vulnerable and patched versions, so we can't distinguish them that way. We could also make the length field higher than what we actually send, but that would mean getting back data we never sent.
*However*, if there's a way to control the data the server sends back, that wouldn't be a problem (I'm no legal specialist, though). I have not yet tested my theory, but sending a few extra bytes in the heartbeat message (and of course incrementing 'length' in the 'ssl3_record_st' struct) should do that. It should cause the server to return only data the client itself sent. If the data isn't sent back, the server isn't vulnerable. No unintended memory is read, because the server did in fact allocate memory for it; it's simply not supposed to use it.
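The probe described above might be sketched like this (a hypothetical sketch under stated assumptions: field layout per RFC 6520, TLS record framing omitted, and the function name is made up; the thread later reports the idea did not work in practice). The point is that every byte a vulnerable server could echo back was supplied by the client:

```python
import struct

def build_benign_probe(payload: bytes, extra: bytes) -> bytes:
    """Heartbeat request per RFC 6520: type | payload_length | payload | padding.

    payload_length claims the extra bytes too. The surrounding TLS record
    (ssl3_record_st.length) would have to be enlarged to carry them, so on
    the wire every byte the server might echo back came from the client.
    """
    claimed = len(payload) + len(extra)
    padding = b"\x00" * 16  # RFC 6520 requires at least 16 bytes of padding
    return struct.pack("!BH", 1, claimed) + payload + extra + padding
```

A patched server would compare the claimed length against the record it actually received, not against the payload alone, which is why this particular construction cannot distinguish the two.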
Just a thought, maybe someone with more knowledge of these things can confirm it?
Tom
Alexander Dietrich schreef op 09/04/14 21:07:
According to Qualys, they have developed a test that "verifies the problem without retrieving any bytes from the server, other than the bytes we send in the heartbeat request": https://community.qualys.com/blogs/securitylabs/2014/04/08/ssl-labs-test-for...
Best regards, Alexander
PGP Key: 0xC55A356B | https://dietrich.cx/pgp
*However*, if there's a way to control the data the server sends back, that wouldn't be a problem (I'm no legal specialist, though). I have not yet tested my theory, but sending a few extra bytes in the heartbeat message (and of course incrementing 'length' in the 'ssl3_record_st' struct) should do that. It should cause the server to return only data the client itself sent. If the data isn't sent back, the server isn't vulnerable. No unintended memory is read, because the server did in fact allocate memory for it; it's simply not supposed to use it.
If I understand you correctly, I think this is what you are asking for: https://github.com/FiloSottile/Heartbleed This tool sends a known string in and reads it back.
BR Felix
Felix Büdenhölzer schreef op 10/04/14 22:13:
If I understand you correctly, I think this is what you are asking for: https://github.com/FiloSottile/Heartbleed This tool sends a known string in and reads it back.
BR Felix
Had a quick look at the code. It's almost doing what I wrote, though it's still actively exploiting the issue by asking for 100 extra bytes (bleed/heartbleed.go line 36, "len(payload)+100"), which will be unknown server memory.
Anyway, I tested my idea from yesterday and it turns out it won't work; the idea is flawed. I do wonder what happens when a second ssl3_record_st frame is sent together with the heartbeat exploit. Would you get the next frame back, as it's next in the stream? But that would only work reliably if you could guarantee OpenSSL reads both frames in the same recv() call.
Tom
2014-04-09 20:51 GMT+02:00 Paul Pearce pearce@cs.berkeley.edu:
- Should authorities scan for bad OpenSSL versions and force their weight down to 20?
I'd be interested in hearing people's thoughts on how to do such scanning ethically (and perhaps legally). I was under the impression the only way to do this right now is to actually trigger the bounds bug and export some quantity (at least 1 byte) of memory from the vulnerable machine.
Considering the consequences of having (a lot of) vulnerable machines in the network, wouldn't it be unethical NOT to do this kind of testing? Basically, every vulnerable relay endangers its users by making it possible to decrypt their communications. I strongly feel that the benefits (securing the network) outweigh the costs (exploiting the vulnerable machines, reading 1 byte of memory, and discarding it). Especially seeing that anybody is able to perform the exploit, I don't see a moral problem with such an approach.
How this works out legally I of course have no idea.
TvdW
- Should we consider every key that was created before Tuesday
You'd also need to know the key was created by a vulnerable OpenSSL 1.0.1 version, that heartbeats weren't already disabled, etc. That data isn't announced in the consensus. And relays that weren't vulnerable may be happy continuing with their uptime/key.
On Wed, Apr 9, 2014 at 2:51 PM, Paul Pearce pearce@cs.berkeley.edu wrote:
I'd be interested in hearing people's thoughts on how to do such scanning ethically (and perhaps legally).
That's an interesting dual-ish question, given that we don't own these relays, often have no real means of contacting their operators, and yet they're part of the network in some voluntary fashion. I don't have any good suggestion on that, other than that collecting private data, as opposed to statistical surveys, is a problem area.
If we knew which relays were subject to the bug, the long-term goal should be to blacklist their fingerprints. Most uncontactable operators will get the clue after a few rounds of that, and/or when visiting tpo for new releases due to consensus version deprecation.
If you browse onions you may find anonymous researchers who conduct their activities via exits, publish their results on onion services, and announce them in various fora. I've not yet seen anyone cataloging this bug as it relates to Tor in that manner.