Hello,
As we know, hidden services can be useful for all kinds of legitimate things (Pond's usage is particularly interesting); however, they are also sometimes used by botnets and other problematic things.
Tor provides exit policies to let exit relay operators restrict traffic they consider to be unwanted or abusive. In this way a kind of international group consensus emerges about what is and is not acceptable usage of Tor. For instance, SMTP out is widely restricted.
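For instance, a common restrictive exit policy in torrc looks something like this (illustrative port choices):

    # Exit policy entries are evaluated first-match-wins, top to bottom:
    # refuse outbound SMTP, exit web traffic, reject everything else.
    ExitPolicy reject *:25
    ExitPolicy accept *:80,accept *:443
    ExitPolicy reject *:*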
Has there been any discussion of implementing similar controls for hidden services, where relays would refuse to act as introduction points for hidden services that match certain criteria, e.g. ones with a particular key, or whose key appears in a list downloaded occasionally via Tor itself? In this way relay operators could avoid their resources being used for establishing communication with botnet C&C servers.
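A sketch of what such a knob might look like, by analogy with ExitPolicy (the option names below are hypothetical; nothing like them exists in Tor today):

    # Hypothetical torrc sketch - neither option exists in Tor.
    # Refuse to introduce for a specific HS key, or for any key on an
    # occasionally refreshed list fetched over Tor; introduce otherwise.
    IntroPolicy reject key:ABCD1234EXAMPLEKEYFINGERPRINT
    IntroPolicyListURL http://exampleblocklist.onion/current.txt
    IntroPolicy accept *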
Obviously such a scheme would require a protocol and client upgrade to avoid nodes building circuits to relays that then refuse to introduce.
The downside is additional complexity. The upside is potentially recruiting new relay operators.
On Mon, Jul 21, 2014 at 12:34:50AM +0200, Mike Hearn wrote:
> Hello,
> As we know, hidden services can be useful for all kinds of legitimate things (Pond's usage is particularly interesting); however, they are also sometimes used by botnets and other problematic things.
> Tor provides exit policies to let exit relay operators restrict traffic they consider to be unwanted or abusive. In this way a kind of international group consensus emerges about what is and is not acceptable usage of Tor. For instance, SMTP out is widely restricted.
This isn't about 'acceptable usage of Tor'; it's a necessary compromise to limit exit operators' exposure to ISP harassment. No analogous situation applies to encrypted traffic crossing a middle relay.
> Has there been any discussion of implementing similar controls for hidden services, where relays would refuse to act as introduction points for hidden services that match certain criteria, e.g. ones with a particular key, or whose key appears in a list downloaded occasionally via Tor itself? In this way relay operators could avoid their resources being used for establishing communication with botnet C&C servers.
> Obviously such a scheme would require a protocol and client upgrade to avoid nodes building circuits to relays that then refuse to introduce.
> The downside is additional complexity. The upside is potentially recruiting new relay operators.
The ability to do this implies the ability for intro points to learn the identity public keys of hidden services they are introducing. Unfortunately, I believe this sort of enumeration attack is possible with the current HS protocol, but I think proposal 224 will fix it.
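To make that concrete: a policy-enforcing intro point would have to run something like the following check when a service establishes an introduction circuit (a Python sketch with illustrative names, not Tor internals), and it only works if the relay can learn the service's identity key at that point:

    import hashlib

    # Illustrative blocklist of SHA-256 hashes of HS identity keys,
    # fetched occasionally over Tor (as the proposal suggests).
    BLOCKLIST = set()  # e.g. a set of hex digests

    def accept_introduction(service_identity_key):
        # Only possible if the intro point can learn the service's
        # identity key; proposal 224 is intended to prevent exactly that.
        digest = hashlib.sha256(service_identity_key).hexdigest()
        return digest not in BLOCKLIST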
> This isn't about 'acceptable usage of Tor'; it's a necessary compromise to limit exit operators' exposure to ISP harassment.
Even if we accept your premise that no exit operator cares about internet abuse, it's still the same thing. ISPs define what is acceptable usage of their internet connections and, by implication, what is acceptable usage of the Tor exit. Tor could ignore what ISPs want (which is usually quite reasonable), but then "no Tor" clauses in ISP acceptable usage policies would just become even more prevalent.
> The ability to do this implies the ability for intro points to learn the identity public keys of hidden services they are introducing. Unfortunately, I believe this sort of enumeration attack is possible with the current HS protocol, but I think proposal 224 will fix it.
It is currently possible and I am aware of proposal 224, which is why I'm bringing this up now. I don't think this is something that should be fixed without a *lot* of thought given to the consequences. I am by the way quite aware of all the counter arguments already, but someone has to play the devil's advocate here.
One of my first concerns would be that this would build in a very easy way for a government (probably the US government) to compel Tor to add in a line of code that says "If it's this hidden service key, block access."
After all - it's a stretch to say "You must modify your software to support blocking things"[0] but it's not so much a stretch to say "You already have the code written to block access to things, block access to this thing."
-tom
[0] The OnStar legal case notwithstanding
> One of my first concerns would be that this would build in a very easy way for a government (probably the US government) to compel Tor to add in a line of code that says "If it's this hidden service key, block access."
And people who run Tor could easily take it out again, what with it being open source and all.
> After all - it's a stretch to say "You must modify your software to support blocking things"[0]
I don't believe it's a stretch. If I did, perhaps I wouldn't bring the topic up.
Judges and lawmakers care very little about the (in their eyes) minor distinction between "the code to do this wasn't written yet" and "the code to do this wasn't configured yet". For example, look at the EU right to be forgotten ruling. The fact that no infrastructure existed to sift through tens of thousands of vague requests for search results to be removed didn't faze the court one bit, nor did the massive size of the project that resulted. They simply interpreted the (vague, poor) law put in front of them.
Regardless, even if there is such a difference, jurisdiction would still have the same effect as today. If there's even one relay that supports introductions to a HS then the protocol would still technically work, but operators in regions where the government proved unfavourable would be protected and still able to operate.
Additionally, in the absence of government coercion, the Tor relay community would then be able to collectively decide if they really want to pay for the privilege of giving bandwidth to botnet and ransomware operators.
On Mon, 2014-07-21 at 11:48 +0200, Mike Hearn wrote:
>> One of my first concerns would be that this would build in a very easy way for a government (probably the US government) to compel Tor to add in a line of code that says "If it's this hidden service key, block access."
> And people who run Tor could easily take it out again, what with it being open source and all.
You're an intelligent person and probably know that it's more complicated than that. Any automatically updating mechanism to retrieve the Hidden Service Censorship List is a massive attack vector, because two clients having two different sets of introduction points for a hidden service, or two hidden services having different sets of introduction points available, causes a partition in the anonymity set.
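As a toy illustration (hypothetical relay names), two clients holding different versions of such a list end up with different usable intro-point sets, and an observer can tell them apart by which relays they are willing to use:

    # Toy model: different blocklist versions yield different usable
    # intro-point sets, splitting the anonymity set into partitions.
    INTRO_POINTS = {"relayA", "relayB", "relayC"}

    old_list, new_list = {"relayC"}, set()
    client1 = INTRO_POINTS - old_list   # {"relayA", "relayB"}
    client2 = INTRO_POINTS - new_list   # all three relays

    # An observer who sees a client use relayC has just learned which
    # version of the list that client holds.
    assert client1 != client2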
Regardless of the moral arguments you put forward, which I will not comment on, it seems like this idea would never be implemented because none of the Tor developers have a desire to implement such a dangerous feature.
If you've already thought of this, as you implied in another email, why bring it up? Do you think you'll get the Tor community to agree to enable such a damaging attack?
Further, why do you think such infrastructure would be remotely successful in stopping botnets from using the Tor network? A botnet could just generate a thousand hidden service keys and cycle through them.
So, this would be:
* Socially damaging, because it would fly in the face of Tor's anti-censorship messaging
* Technically damaging, because it would enable the worst class of attacks by allowing attackers to pick arbitrary introduction points
* Not technically helpful against botnets, because they can just cycle keys
* Not even technically helpful against other content, because they can change addresses faster than volunteers maintaining lists of all the CP onionsites can do the detective work (which you assume people will want to do, and do rapidly enough that this will be useful)
Let's skip all the "devil's advocate" discussion. It isn't useful and it'll cause traffic on this thread to blow up more than it already has.
Instead, why don't you just present the strongest counterarguments you've thought of against this proposal, which surely include the above, and then the strongest counterarguments to those arguments, which justify your position and have caused you, as an intelligent person, bearing all those negative effects in mind, to *still* hold this position.
> Regardless of the moral arguments you put forward, which I will not comment on, it seems like this idea would never be implemented because none of the Tor developers have a desire to implement such a dangerous feature.
I can argue that the lack of it is also dangerous, actually. It amounts to a form of "pick your poison".
Consider exit policies. Would Tor be better off if all relays were also required to exit all traffic? I think it's obvious the answer is no because there are currently ~5800 relays and ~1000 exits according to the lists from torstatus.blutmagie.de, so most Tor relay operators choose not to exit. If they didn't have that choice, there'd almost certainly be far fewer relays. Allowing relays to contribute as much as they feel comfortable with (or that their ISP feels comfortable with) helps the project a lot.
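(Those figures are easy to re-derive from a live consensus, e.g. with a short script using the stem library; a sketch, and the counts will have drifted since then:)

    # Sketch (assumes `pip install stem`): recount relays vs. exits
    # from a live consensus.
    from stem.descriptor.remote import DescriptorDownloader

    downloader = DescriptorDownloader()
    relays = list(downloader.get_consensus().run())
    exits = [r for r in relays if 'Exit' in r.flags]
    print('%d relays, %d exits' % (len(relays), len(exits)))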
Tor is not a large network. It's a niche product that routinely sacrifices usability for better anonymity, and as a result is politically vulnerable. I don't want Tor to be vulnerable, I think it's a useful piece of infrastructure that will be critical for improving the surveillance situation. Regardless, "anonymity loves company" and Tor has little. By demanding everyone who takes part support all uses of Tor simultaneously, including the obviously bad ones, you ensure some people will decide not to do so, reducing the company you have and thus making it easier for politicians/regulators/others to target the network.
The above argument is general - it would also apply to giving end users different tradeoffs in the TBB, for example, a mode designed for pseudonymity rather than anonymity that doesn't clear cookies at the end of the session. Then it'd be more convenient for users who don't mind if the services they use can correlate data across their chosen username, they just want a hidden IP address. Same logic applies - the more people use Tor, the safer it is.
It may appear that, because Tor has been around for some years and has not encountered any real political resistance, it will always be like this. Unfortunately I don't think that's a safe assumption, at least not any more. Strong end-to-end crypto apps that actually achieve viral growth and large social impact are vanishingly rare. Skype was one example until they got forced to undo it by introducing back doors. The Silk Road was another. The combination of Bitcoin and Tor is very powerful. We see this not only with black markets but also with Cryptolocker, which appears to be the "perfect crime" (there are no obvious fixes). So times have changed and the risk of Tor coming to the attention of TPTB is much higher now.
The best fixes for this are:
1. Allow people to explicitly take action against abuse of their own nodes, so they have a plausible answer when being visited individually.
2. Grow usage and size as much/as fast as possible, to maximise democratic immunity. Uber is a case study of this strategy right now.
The absence of (1) means it'll be much more tempting for governments to decide that all Tor users should be treated as a group.
> Further, why do you think such infrastructure would be remotely successful in stopping botnets from using the Tor network? A botnet could just generate a thousand hidden service keys and cycle through them.
That's a technique that's been used with regular DNS, and beaten before (DGA). The bot gets reverse engineered to find the iteration function and the domain names/keys eventually get sinkholed. There are counter measures and counter-countermeasures, as always.
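The pattern looks roughly like this (an illustrative Python sketch, not any real bot's algorithm): a deterministic per-day generator that the bot runs, and that defenders can also run ahead of time once the seed is recovered:

    import hashlib
    from datetime import date, timedelta

    # Illustrative: a seed recovered by reverse engineering the bot.
    SEED = b"seed-recovered-by-reverse-engineering"

    def daily_name(day):
        # Deterministic per-day identifier. A Tor-based bot would derive
        # a keypair (hence a .onion address) from the digest rather than
        # a DNS name, but the sinkholing logic is the same.
        digest = hashlib.sha256(SEED + day.isoformat().encode()).hexdigest()
        return digest[:16] + ".example.com"

    # Defenders who know SEED can enumerate and sinkhole upcoming names.
    print([daily_name(date.today() + timedelta(days=i)) for i in range(7)])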
But yes, some types of abusers are harder to deal with than others, that's for sure. If it helps, s/botnet/ransomware/. The same arguments apply. I don't want to dwell on just botnet controllers.
With respect to your specific counter-arguments:
> So, this would be:
> * Socially damaging, because it would fly in the face of Tor's anti-censorship messaging
That seems like a risky argument to me - it's too easy for someone to flip it around by pointing out all the extremely nasty and socially damaging services that Tor currently protects. If you're going to talk about social damage you need answers for why HS policies would be more damaging than those things.
Also, the Tor home page doesn't prominently mention anti-censorship anywhere; it talks about preserving privacy. If you wanted to build a system that's primarily about resisting censorship of data, it would look more like Freenet than hidden services (which can be censored, in a way, using DoS attacks and the like).
> * Technically damaging, because it would enable the worst class of attacks by allowing attackers to pick arbitrary introduction points
Who are the attackers in this case, and how do they force a selection of introduction points? Let's say Snowden sets up a blog as a hidden service. It appears in nobody's policies, because everyone agrees that this is a website worth hiding.
If the attacker is the NSA, what do they do next?
> * Not even technically helpful against other content, because they can change addresses faster than volunteers maintaining lists of all the CP onionsites can do the detective work (which you assume people will want to do, and do rapidly enough that this will be useful)
I didn't assume that, actually, I assumed that being able to set policies over the use of their own bandwidth would encourage people to contribute more - seems a safe assumption. You don't need perfection to achieve that outcome.
But regardless, changing an onion address is no different to changing a website address. It's not sufficient just to change it. Your visitors have to know what the new address is. You're an intelligent guy so I'm sure you see why this matters.
(Aside: I think this thread is unrelated enough to tor-dev at this point that I'm going to make this my last reply.)
On Tue, 2014-07-22 at 14:42 +0200, Mike Hearn wrote:
>> Regardless of the moral arguments you put forward, which I will not comment on, it seems like this idea would never be implemented because none of the Tor developers have a desire to implement such a dangerous feature.
> I can argue that the lack of it is also dangerous, actually. It amounts to a form of "pick your poison".
> Consider exit policies. Would Tor be better off if all relays were also required to exit all traffic? I think it's obvious the answer is no because there are currently ~5800 relays and ~1000 exits according to the lists from torstatus.blutmagie.de, so most Tor relay operators choose not to exit. If they didn't have that choice, there'd almost certainly be far fewer relays. Allowing relays to contribute as much as they feel comfortable with (or that their ISP feels comfortable with) helps the project a lot.
Well, Tor would be more anonymous if there were no exit policies, so yes, Tor would be better without exit policies. People closer than I to the Tor Project have said as much elsewhere in this thread.
> Tor is not a large network. It's a niche product that routinely sacrifices usability for better anonymity, and as a result is politically vulnerable. I don't want Tor to be vulnerable, I think it's a useful piece of infrastructure that will be critical for improving the surveillance situation. Regardless, "anonymity loves company" and Tor has little. By demanding everyone who takes part support all uses of Tor simultaneously, including the obviously bad ones, you ensure some people will decide not to do so, reducing the company you have and thus making it easier for politicians/regulators/others to target the network.
This is not a security argument, it is a political argument. I notice that you never actually address the fact that your suggestion can be used to partition the network for clients.
There are other political responses to this argument. The most common one is to point out that all taxpayers support all uses of roads simultaneously, including the obviously bad ones. There are existing legal mechanisms for dealing with abuse that don't involve people withholding tax dollars from roads they feel are primarily "bad." If such withholding were possible in the United States, for example, one could imagine a host of negative social consequences, like roads to mosques or to primarily Black/Hispanic/Jewish communities being ill-funded.
> The above argument is general - it would also apply to giving end users different tradeoffs in the TBB, for example, a mode designed for pseudonymity rather than anonymity that doesn't clear cookies at the end of the session. Then it'd be more convenient for users who don't mind if the services they use can correlate data across their chosen username, they just want a hidden IP address. Same logic applies - the more people use Tor, the safer it is.
Tor's refusal to sacrifice security is a fairly mundane example of consequentialist thinking. The consequences of a user having to log in to Gmail twice after closing TBB are pretty minimal. The consequences of a user accidentally downloading TBB Lite and getting shot are pretty severe.
Your proposal has a similar trade-off. You have to argue that the social benefit to Tor outweighs the potential for the attack that it enables. You've yet to clearly do this; so far you've just restated your point that there are bad things on Tor and that it would be good to fight them by any means necessary.
> It may appear that, because Tor has been around for some years and has not encountered any real political resistance, it will always be like this. Unfortunately I don't think that's a safe assumption, at least not any more. Strong end-to-end crypto apps that actually achieve viral growth and large social impact are vanishingly rare. Skype was one example until they got forced to undo it by introducing back doors. The Silk Road was another. The combination of Bitcoin and Tor is very powerful. We see this not only with black markets but also with Cryptolocker, which appears to be the "perfect crime" (there are no obvious fixes). So times have changed and the risk of Tor coming to the attention of TPTB is much higher now.
I don't see the logic here. Tor already faces extreme political repression, and its situation is strikingly different from Microsoft's handling of Skype (which was rearchitected because supernodes made for bad UX and didn't scale) and from The Silk Road (which was illegal from the start, and was rapidly replaced with several other marketplaces).
> The best fixes for this are:
> 1. Allow people to explicitly take action against abuse of their own nodes, so they have a plausible answer when being visited individually.
> 2. Grow usage and size as much/as fast as possible, to maximise democratic immunity. Uber is a case study of this strategy right now.
> The absence of (1) means it'll be much more tempting for governments to decide that all Tor users should be treated as a group.
This is already happening. We live in that world now. We can't go back.
Right now, by the way, the plausible answer is "it's impossible for me to filter out certain kinds of communication." In spite of that Tor is legal in all of the world that cares about such legal handwaving. In the parts of the world where Tor is truly dangerous, no amount of "oh okay I'll block that hidden service" will save you.
It seems better to evangelize Tor and bring about #2 than to torpedo Tor's primary use case by introducing a censorship mechanism.
>> Further, why do you think such infrastructure would be remotely successful in stopping botnets from using the Tor network? A botnet could just generate a thousand hidden service keys and cycle through them.
> That's a technique that's been used with regular DNS, and beaten before (DGA). The bot gets reverse engineered to find the iteration function and the domain names/keys eventually get sinkholed. There are counter measures and counter-countermeasures, as always.
Is it really productive to damage Tor's primary value proposition (strong anonymity) in order to take one more step in an arms race?
> But yes, some types of abusers are harder to deal with than others, that's for sure. If it helps, s/botnet/ransomware/. The same arguments apply. I don't want to dwell on just botnet controllers.
Ransomware existed before Tor, and it would continue to exist after this point. Any botnet or ransomware operator could just have bots host safe introduction points. They'd be less anonymous, but I bet they wouldn't care.
> With respect to your specific counter-arguments:
>> So, this would be:
>> * Socially damaging, because it would fly in the face of Tor's anti-censorship messaging
> That seems like a risky argument to me - it's too easy for someone to flip it around by pointing out all the extremely nasty and socially damaging services that Tor currently protects. If you're going to talk about social damage you need answers for why HS policies would be more damaging than those things.
See above re: consequentialism, roads, etc. This is not a new concept.
> Also, the Tor home page doesn't prominently mention anti-censorship anywhere; it talks about preserving privacy. If you wanted to build a system that's primarily about resisting censorship of data, it would look more like Freenet than hidden services (which can be censored, in a way, using DoS attacks and the like).
Tor is used and promoted as an anti-censorship tool. That is what the bridge feature is primarily used for: evading censorship. If you google "censorship circumvention", Tor is named in the Wikipedia page that is the first result, and the third result is Whonix.
>> * Technically damaging, because it would enable the worst class of attacks by allowing attackers to pick arbitrary introduction points
> Who are the attackers in this case, and how do they force a selection of introduction points? Let's say Snowden sets up a blog as a hidden service. It appears in nobody's policies, because everyone agrees that this is a website worth hiding.
> If the attacker is the NSA, what do they do next?
They inject it into people's policies after compromising a connection to any directory server. Possibly one distributed with a backdoored TBB (which is as secure as your initial connection to torproject.org).
Do you really think that if you set up a censorship system, it's not going to increase attack surface? Any crypto scheme you can devise will not stand up against a motivated attacker on a long enough timeline.
>> * Not even technically helpful against other content, because they can change addresses faster than volunteers maintaining lists of all the CP onionsites can do the detective work (which you assume people will want to do, and do rapidly enough that this will be useful)
> I didn't assume that, actually, I assumed that being able to set policies over the use of their own bandwidth would encourage people to contribute more - seems a safe assumption. You don't need perfection to achieve that outcome.
You do need perfection for all of the social arguments you're making. You've put this forward as a way for the Tor Project to deflect the bad PR of "bad content" on hidden services, but in order for that to happen the technique needs to *actually work* at reducing the amount of bad traffic. People don't make decisions rationally and aren't going to go from opposing Tor because bad people use it to supporting Tor because *they don't personally help* the bad people use it. It's bad enough that Tor is associated with bad people. This is called the halo/horns effect in the bias literature; it is very well studied.
> But regardless, changing an onion address is no different to changing a website address. It's not sufficient just to change it. Your visitors have to know what the new address is. You're an intelligent guy so I'm sure you see why this matters.
These sites *already* change their addresses all the time. User experience and retention isn't their biggest concern.
> (Aside: I think this thread is unrelated enough to tor-dev at this point that I'm going to make this my last reply.)
That's too bad - I was only answering questions you posed yourself. Happy to continue debating off list. Still, I think discussion of features that could increase usage is on topic; there's a similar thread about creating social rewards for relay operators, after all.
Re: technological attacks/partitioning. I did not respond to this because I didn't understand the attack you're proposing; that's why I asked for a step-by-step example against a hypothetical Snowden blog. But your answer starts with "first, you break Tor's security". That's not something that HS policies make newly possible. If you can pwn the user's TBB download or impersonate the directory authorities, you win no matter what: HS policies are irrelevant.
I don't think there's any new technical attack HS policies would open up, if they were done in the same way as exit policies. From the perspective of an HS trying to initialise, it'd just be equivalent to having a smaller network. As you already said you'd happily sacrifice the ~5000 nodes that don't exit traffic because they're harming Tor's anonymity, presumably a smaller network isn't a big deal for you?
If there's a specific technical attack that doesn't rely on general attacks against Tor, I'm still keen to hear a step by step example of how to do it.
Re: politics. Yes it's a largely political argument. That's fine: Tor is a political animal, it has got a lot of funding from organisations with explicitly political agendas, the "who uses tor" section on the front page is full of characters with political goals like activists and whistleblowers. Tor does not exist independent of politics - politics should inform its technical design decisions (and does already).
Re: TBB. The consequence of TBB not having any setting below "extreme" is not at all minimal, as you claim; it's a probably severe reduction in the kind of usage that could insulate Tor against political pressure. I claim this because in my former job I saw the different usage levels of HotSpot Shield vs Tor. Yes, the small number of users who might get shot need, and should have, that hard-core no-compromises mode. For everyone else who would like some additional privacy but isn't worried about getting shot, the consequence of Tor's current approach is that they just don't use Tor.
The same is true of other functions, like running a relay. Having knobs people can tweak is not weakness. It's acceptance of the fact that not *everyone* who wants to have privacy is Tank Man, and not *everyone* who wants to contribute to privacy feels ransomware/revenge porn sites are as worthy of protection as newspaper dropboxes.
I've blocked Mike's known nodes in my configs, as I simply do not agree with his apparent ethos in this regard - themes of censorship, policing, etc. It's better that individuals decide for themselves, or on peer input, than that hard forms of tracking and control prevail. There is a lot of opportunity in permitting such freedoms, and in fact they effect the desired outcomes; the controls Mike hints at from time to time yield only misunderstanding, debilitating hatred, and a lack of social progress. In a forum, for example, the power of interpretation lies with the consumer and feeds back to the publisher, so the Mikes of the world need not be entertained.
On Sun, Jul 20, 2014 at 6:34 PM, Mike Hearn <mike@plan99.net> wrote:
> Hello,
> As we know, hidden services can be useful for all kinds of legitimate things (Pond's usage is particularly interesting); however, they are also sometimes used by botnets and other problematic things.
> Tor provides exit policies to let exit relay operators restrict traffic they consider to be unwanted or abusive. In this way a kind of international group consensus emerges about what is and is not acceptable usage of Tor. For instance, SMTP out is widely restricted.
> Has there been any discussion of implementing similar controls for hidden services, where relays would refuse to act as introduction points for hidden services that match certain criteria, e.g. ones with a particular key, or whose key appears in a list downloaded occasionally via Tor itself? In this way relay operators could avoid their resources being used for establishing communication with botnet C&C servers.
> Obviously such a scheme would require a protocol and client upgrade to avoid nodes building circuits to relays that then refuse to introduce.
> The downside is additional complexity. The upside is potentially recruiting new relay operators.
HS operators will just change their HS keys out from under your list; then it becomes whack-a-mole. And you'll also be throwing out shared services with the bathwater. Are you funding maintenance of that list? Ready to be called a censor when you exceed your noble intent, as all have done before? And to be ignored by operators who don't care to subscribe to your censor list, thus nullifying your efforts (not least because it may be illegal for them to look at services on the list to verify it, or to look at and make decisions regarding the content of traffic that transits them)? And ignored by botnet ops, who will surely run their own relays and internal pathing.
On 07/21/2014 12:34 AM, Mike Hearn wrote:
> Tor provides exit policies to let exit relay operators restrict traffic they consider to be unwanted or abusive. In this way a kind of international group consensus emerges about what is and is not acceptable usage of Tor. For instance, SMTP out is widely restricted.
As Andrea said, the exit policies are there mostly to have a small knob to stop complaints.
In that sense, participation as a hidden service is "opt-in": you're willing to lose the ability to use IP addresses as a rough method of identifying users.
A network provider should in an ideal world _never_ [be able to] interfere with any of the traffic they transport. I already feel very uncomfortable limiting "arbitrary" destinations based on IP and port. A network provider is a neutral channel. Remember, data payload is just protocol overhead.