On 05/20/2015 12:18 AM, tor-dev-request@lists.torproject.org wrote:
Furthermore, there is the question of time. As .tor names are pinned (if you have a name, you get to keep it 'forever', right?), an adversary may invest in the required resources to make the attack succeed *briefly* (i.e. get the required majority), then re-assign names to himself. New honest nodes would pick up this hacked consensus, and thus it would persist even after the adversary lost the majority required to establish it. This is relevant, as an attack against Tor users' anonymity only impacts the users as long as the attack itself lasts, so the difference in gains between the two attacks (temporary vs. "forever") changes the economic incentives for performing them.
So I'm not sure it is such an "obvious" choice to just rely on an honest majority of (long-term/etc.) Tor routers. I'm not saying it is bad; however, simply saying that if those routers are compromised all is lost anyway is not quite correct.
To carry out the attack you describe, they would need to have control of enough colluding Tor nodes that they control the largest agreeing subset in the Quorum. It's not about PoW to control the Quorum; the Quorum is a group of Tor routers. My design also assumes that there is no dynamic compromise of Tor routers (there's no incentive for an attacker to target Tor routers because of OnioNS), so we can consider a static level of compromise. As I've shown in my analysis, if the Quorum is large enough, the chances of selecting a malicious Quorum, either per selection or cumulatively, are extremely low even at Tor-crippling levels of collusion.
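To give a feel for the numbers (the figures below are illustrative assumptions on my part, not results from the paper): if the Quorum is drawn uniformly at random from the router list, the chance of a malicious majority is a simple hypergeometric tail.

    from math import comb

    # Illustrative sketch, not OnioNS code: probability that a Quorum of
    # size q, drawn uniformly at random from N routers of which m collude,
    # contains a malicious majority (hypergeometric tail).
    def p_malicious_majority(N, m, q):
        return sum(comb(m, k) * comb(N - m, q - k)
                   for k in range(q // 2 + 1, q + 1)) / comb(N, q)

    # Example figures (made up): 6500 routers, 20% colluding, Quorum of 127.
    print(p_malicious_majority(6500, 1300, 127))
    # prints a vanishingly small probability (far below 1e-10)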
OK, let's assume a c4.xlarge EC2 instance (which is roughly an i7) takes 4h to do this (on all cores). For one month, the price is USD 170, which puts the registration cost around 70 cents/name (for eternity, or do I have to do this repeatedly? I don't recall if you require fresh PoWs). Anyway, 4h sounds pretty inconvenient to a user, but as you can see it is still nothing for a professional domain-name squatter, who today pays closer to 100x that to squat for a year. I predict most 'short' names will be taken in no time if this is deployed.
With Namecoin, you have an inherent limit on the rate at which names can be registered. Now, once people start squatting tons of .tor names, maybe even your bandwidth advantage disappears as the consensus may become rather large.
That's a fair point. It's a hard problem to solve. It's subtle, but I also put in a requirement that the network must ensure that the registration points to an available hidden service. Thus it forces innocent users and attackers to also spin up a hidden service. It's not foolproof, but it's better than nothing. I've also been thinking about a proof-of-stake solution wherein the network only accepts a registration if the destination HS has been up for > X days. Another idea is to have the Quorum select a random time during the week, test for the availability of the hidden service, and then sign whether they saw the HS or not. Then the next Quorum could repeat this test, check the results from the previous Quorum, and void the Record if they also observed that the hidden service was down. I like both of these ideas, but I have not yet solidified their implementation so I was not ready to announce them in the paper.
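To make that second idea a bit more concrete, here is a rough sketch (my own illustration; the names and structure here are made up and nothing is final): Quorum n records what it saw, and Quorum n+1 re-probes and voids the Record only if both observations agree the hidden service was down.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        name: str        # the registered .tor name
        hs_up: bool      # whether the hidden service answered the probe
        quorum_id: int

    def probe_hidden_service(onion_address):
        # Placeholder: a real check would build a circuit at a random,
        # unpredictable time during the week and attempt a rendezvous.
        raise NotImplementedError

    def check_record(name, onion, quorum_id, prev_obs=None):
        obs = Observation(name, probe_hidden_service(onion), quorum_id)
        if prev_obs is not None and not prev_obs.hs_up and not obs.hs_up:
            return obs, "void"   # two consecutive Quorums saw the HS down
        return obs, "keep"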
Well, I prefer my hidden services to be really hidden and not public. I understand that this weakness is somewhat inherent in the design, but you should state it explicitly.
Too late, your hidden services are already leaking across Tor's distributed hash table. There are even Tor technical reports and graphs on metrics.torproject.org which count them, which I assume also implies that they are enumerated. I can't remember where I read it, but I do recall reading a report in which the researcher spun up a number of hidden services, didn't tell anyone about them, but then observed that someone connected to them anyway from time to time. Someone out there is enumerating HSs. Tor's HS protocol isn't designed to hide the existence of HSs, and neither is mine. I can state it explicitly, but there's no practical way around it as far as I can see.
See, that's the point: Namecoin (and your system) assume a different adversary model than what Zooko intended to imply when he formulated his triangle. When Zooko said "secure", he meant "against an adversary that does have more CPU power than all of the network combined and unlimited identities". When you say "secure", you talk about Namecoin's adversary model where the adversary is in the minority (CPU and identity/bandwidth-wise).
Thus, it is unfair for you to say that your system 'solves' Zooko's triangle, as you simply lowered the bar.
As you say, Namecoin assumes that it has more CPU power than adversaries. Perhaps I should have clarified more explicitly: even with unlimited CPU power (setting aside the potential for cryptographic breaks), an attacker would not be able to compromise the Quorum and thus OnioNS as a whole. I am assuming that more Tor routers are controlled by honest sysadmins than by adversaries. An adversary could spin up many new Tor routers in an attempt to increase his chances of controlling the Quorum, but he also has to earn the required router flags, which is costly. Anyone who remembers the Lizard Squad incident recalls how quickly Sybil attacks are dealt with. I can theoretically extend the size of the Quorum to the size of the Tor network, so then the attacker has to gain control of the Tor network such that his colluding nodes form the largest group of Tor routers signing the same Page. It's not about CPU power, it's about the honesty of nodes in the Tor network. That gives me the globally collision-free property. Perhaps I have lowered the bar, but I do think it's a bit higher than Namecoin because OnioNS is only dependent on the distribution of identities, and not the distribution of CPU power.
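In other words, the consensus rule amounts to "accept the Page backed by the largest agreeing set of signatures"; a toy illustration (not the actual OnioNS code):

    from collections import Counter

    # Toy illustration of "largest agreeing subset": each router signs the
    # Page it computed; clients accept the Page that the most routers agree on.
    def winning_page(signed_pages):
        # signed_pages: list of (router_fingerprint, page_hash) pairs
        counts = Counter(page_hash for _, page_hash in signed_pages)
        return counts.most_common(1)[0]   # (page_hash, number_of_signers)

    # Example: 7 of 9 routers agree on the same Page.
    votes = [("r%d" % i, "pageA") for i in range(7)] + \
            [("r7", "pageB"), ("r8", "pageB")]
    print(winning_page(votes))  # ('pageA', 7)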
So if this is successfully deployed and massively used, I could see the NSA/FBI/CIA team up, buy up computing and network resources globally for a month (or however long it takes) to take control of _all_ established high-profile hidden service sites. At least that's plausible enough for me.
As I said, to break OnioNS they would have to introduce many routers into the Tor network in order to increase their chances of gaining control of the Quorum. You can think of hidden services here like owners in Namecoin; they just control their information, which is orthogonal to the actual functionality of the system.
Could we please make the protocol a bit more general than this?
Yes, I will look into it. Your description is helpful, but if you want to write up a protocol describing what you need on your end, I'll merge it into mine, and then we'll have a protocol that is compatible with both of our needs. I would be happy to modify my software accordingly.
On 05/23/2015 06:26 PM, OnioNS Dev wrote:
My design also assumes that there is no dynamic compromise of Tor routers (there's no incentive for an attacker to target Tor routers because of OnioNS)
I can live with explicitly stated design assumptions, but the claim that there is "no incentive for an attacker to target Tor routers because of OnioNS" is rather wild.
With Namecoin, you have an inherent limit on the rate at which names can be registered. Now, once people start squatting tons of .tor names, maybe even your bandwidth advantage disappears as the consensus may become rather large.
That's a fair point. It's a hard problem to solve. It's subtle, but I also put in a requirement that the network must ensure that the registration points to an available hidden service. Thus it forces innocent users and attackers to also spin up a hidden service. It's not foolproof, but it's better than nothing.
Interesting. Is a powerful adversary able to prevent registration by somehow denying/delaying access to the new ".onion" service and concurrently submitting a competing registration for the same name? I remember such attacks being discussed for DNS, where a candidate's search for available names might cause those names to be quickly reserved by some automated process as a means to extort name re-assignment fees. Just wondering if you considered this possibility. (IIRC Namecoin defends against this by having an additional commit-and-reveal process where the name is first reserved without the name itself being revealed.)
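For reference, the general commit-and-reveal idea looks roughly like this (a generic sketch, not Namecoin's actual name_new/name_firstupdate format):

    import hashlib, os

    # Commit-and-reveal sketch: first publish only a salted hash of the
    # name; reveal the name (and salt) later, once the commitment is
    # buried, so nobody can front-run the registration.
    def commit(name):
        salt = os.urandom(16)
        commitment = hashlib.sha256(salt + name.encode()).digest()
        return commitment, salt      # broadcast commitment, keep salt secret

    def reveal_is_valid(commitment, salt, name):
        return hashlib.sha256(salt + name.encode()).digest() == commitment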
I've also been thinking about a proof-of-stake solution wherein the network only accepts a registration if the destination HS has been up for > X days.
Can a HS have more than one name?
Another idea is to have the Quorum select a random time during the week, test for the availability of the hidden service, and then sign whether they saw the HS or not. Then the next Quorum could repeat this test, check the results from the previous Quorum, and void the Record if they also observed that the hidden service was down. I like both of these ideas, but I have not yet solidified their implementation so I was not ready to announce them in the paper.
Sure, good time to discuss them then ;-).
Well, I prefer my hidden services to be really hidden and not public. I understand that this weakness is somewhat inherent in the design, but you should state it explicitly.
Too late, your hidden services are already leaking across Tor's distributed hash table.
Today, yes. Tomorrow, who knows; I'm still hoping that the next generation of HS will fix that, and I hope to get Tor to accept the GNS method for encrypting information in the DHT. Which, btw, is pretty generic (we also use it in GNUnet file sharing, and I have other plans as well). In fact, I think if you look at the GNS crypto closely, it might offer a way to encrypt most information in any DHT (and offer confidentiality against an adversary that cannot guess the name/label/keyword or perform a confirmation attack).
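Very roughly, the idea is that both the DHT key and the record-encryption key are derived from the zone key and the label, so only someone who already knows both can find or decrypt a record. A much-simplified sketch (using the pyca/cryptography package for illustration; this is not the actual GNS derivation, which additionally blinds the zone key itself):

    import hashlib, os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def dht_index(zone_pub, label):
        # Where the block is stored; derivable only from zone key + label.
        return hashlib.sha512(zone_pub + label.encode()).digest()

    def encrypt_record(zone_pub, label, record):
        # Symmetric key derived from the zone's public key and the label.
        key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=b"dht-record-sketch",
                   info=label.encode()).derive(zone_pub)
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, record, None)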
There are even Tor technical reports and graphs on metrics.torproject.org which count them, which I assume also implies that they are enumerated.
You are totally right about the status quo. I just would point out that this may not be true in 2020 ;-).
It's not about CPU power, it's about the honesty of nodes in the Tor network.
I understand that. But whether you base it on IPs, bandwidth, or CPU, you did lower the bar on the adversary.
That gives me the globally collision-free property. Perhaps I have lowered the bar, but I do think it's a bit higher than Namecoin because OnioNS is only dependent on the distribution of identities, and not the distribution of CPU power.
I agree that it is probably easier to mount a 51% CPU-attack against Namecoin than an attack against the OnioNS quorum.
Could we please make the protocol a bit more general than this?
Yes, I will look into it. Your description is helpful, but if you want to write up a protocol describing what you need on your end, I'll merge it into mine, and then we'll have a protocol that is compatible with both of our needs. I would be happy to modify my software accordingly.
I agree that we should have a write-up, but have to add that I hope to delegate most of the writing to Jeff ;-).
Happy hacking!
Christian