RPW et al.'s paper was made public today and demonstrates several practical attacks on Hidden Services. http://www.ieee-security.org/TC/SP2013/papers/4977a080.pdf
I was wondering if there were any private trac tickets, discussions, or development plans about this that might also be made public.
-tom
Tom Ritter:
RPW et al.'s paper was made public today and demonstrates several practical attacks on Hidden Services. http://www.ieee-security.org/TC/SP2013/papers/4977a080.pdf
Sweet. I was waiting for a public version of this to appear. It was shared with a few Tor people, but as I don't work on hidden service-related things, I had not seen it yet.
I was wondering if there were any private trac tickets, discussions, or development plans about this that might also be made public.
There have been a few discussions on this list about making hidden service descriptors harder to harvest. Sysrqb pointed out that a lot of the ideas are captured here: https://trac.torproject.org/projects/tor/ticket/8106
As for the deanonymization attack, I think it is pretty novel in that it uses a custom traffic signature to make the attack from http://freehaven.net/anonbib/cache/hs-attack06.pdf more reliable, but otherwise it is exactly the class of attack that guard nodes were introduced to defend against.
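To make that concrete, here's a minimal sketch (illustrative only; the constants and structure are my assumptions, not the paper's actual values) of how a distinctive cell-count signature makes traffic confirmation reliable: a malicious rendezvous point sends an unusual burst of cells down the circuit, and any attacker-controlled relay on the path flags circuits whose relayed-cell counts match that burst within a short window.

    # Illustrative Python sketch; SIGNATURE_CELLS, TOLERANCE, and
    # WINDOW_SECS are made-up parameters, not the paper's values.
    SIGNATURE_CELLS = 50     # burst size sent by the malicious rend point
    TOLERANCE = 2            # slack for protocol overhead cells
    WINDOW_SECS = 10.0       # the burst should arrive close together

    def matches_signature(cell_times):
        """cell_times: arrival timestamps of cells relayed on one circuit."""
        if abs(len(cell_times) - SIGNATURE_CELLS) > TOLERANCE:
            return False
        return (cell_times[-1] - cell_times[0]) <= WINDOW_SECS

A relay run by the attacker would evaluate something like this on every circuit it carries: a match while sitting in the middle position reveals the service's guard, and a match while sitting in the guard position reveals the service's IP address.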
One immediate thought: what about changing hidden services to maintain a small pool of service-side rend circuits that are reused for as long as possible (perhaps until they simply fail)? This would give a similar effect to multiple layers of guard nodes, but without adding all of that complexity or extra hops.
We would have to add logic preventing DESTROY cells from traveling across all 6 hops in that case, but there are actually multiple reasons to make that change.
In general, a client shouldn't be able to manipulate a service's circuit lifetimes, nor should a service be able to manipulate a client's.
In fact, this ability for remote manipulation of the other party's circuits caused me to decide that the Path Bias detection code should not perform accounting on these circuits, because a malicious counterparty could use that ability to cause you to erroneously lose faith in your Guard nodes.
Unfortunately this also means that if a path bias/route capture adversary can differentiate these circuit types, they could fail the ones we can't do accounting on at will.
So there's now more than one reason to change that DESTROY behavior at the very least, I think.
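For illustration, here's a minimal Python sketch of the pooling idea (a hypothetical structure; Tor's actual circuit code looks nothing like this). The point is simply that a bounded pool of long-lived rend circuits means an attacker forcing reconnects keeps seeing the same three hops rather than a fresh middle node each time:

    import random

    POOL_SIZE = 3  # assumed: a small pool bounds middle-node exposure

    class RendCircuitPool:
        """Reuse service-side rend circuits until they actually fail."""

        def __init__(self, build_circuit):
            self._build = build_circuit  # callback: () -> a built circuit
            self._pool = [build_circuit() for _ in range(POOL_SIZE)]

        def get(self):
            # Hand out an existing circuit instead of building a
            # fresh one per client connection.
            return random.choice(self._pool)

        def on_failure(self, circ):
            # Only a genuine circuit failure rotates a pool entry.
            self._pool.remove(circ)
            self._pool.append(self._build())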
RPW et al.'s paper was made public today and demonstrates several practical attacks on Hidden Services. http://www.ieee-security.org/TC/SP2013/papers/4977a080.pdf
"pg 80: Until now there have been no statistics about the number of hidden services..."
There are some... at least one current crawler project counts around 1100. I've not got far enough into the paper to see what their 'until now' means.
On Thu, May 23, 2013 at 10:18 PM, Tom Ritter tom@ritter.vg wrote:
RPW et al.'s paper was made public today and demonstrates several practical attacks on Hidden Services. http://www.ieee-security.org/TC/SP2013/papers/4977a080.pdf
I was wondering if there were any private trac tickets, discussions, or development plans about this that might also be made public.
Most (all, AFAIK) of what we have to do about this is already public. See tickets #8146, #8147, #2286, #8273, and #8435 for measures already implemented at the directory authority level to make cheap HS-targeting attacks harder, and #8207 for fixing a bug in hidden service user authentication (which is a pretty good countermeasure if you want to avoid enumeration). See #8240 for making Guard node lifetime configurable, and raising the default.
For stuff we'd still like to do, have a look at #8106 for a good crypto idea from rransom that would form the basis of a way to make service enumeration impossible, and some discussion with hyperelliptic. See #8244 for some anti-censorship ideas from arma. See #6418 for an important last step.
(These numbered tickets are all at trac.torproject.org. For example, #8106 is https://trac.torproject.org/projects/tor/ticket/8106 and #8244 is https://trac.torproject.org/projects/tor/ticket/8244 .)
All of the current tickets tagged with "tor-hs" are: https://trac.torproject.org/projects/tor/query?status=accepted&status=as...
Sorry about the enormous URL.
George wrote a good blog post summarizing security issues and related issues with hidden services, which has some good opsec suggestions: https://blog.torproject.org/blog/hidden-services-need-some-love . This week, he also started some discussions on tor-dev about migrating to future hidden service protocols.
And that's what we've got now. George and Roger will probably have more thoughts; this is just me trying to do a braindump.
hth,
On 05/23/2013 07:18 PM, Tom Ritter wrote:
RPW et al.'s paper was made public today and demonstrates several practical attacks on Hidden Services. http://www.ieee-security.org/TC/SP2013/papers/4977a080.pdf
I was wondering if there were any private trac tickets, discussions, or development plans about this that might also be made public.
-tom
Hi, I'm writing a blog post about these new attacks and how they affect document leak services such as Strongbox (http://www.newyorker.com/strongbox/) that rely on hidden services.
Would it be fair to say that using the techniques published in this paper an attacker can deanonymize a hidden service?
Based on this thread it looks like there are several open bugs that need to be fixed to prevent these attacks. It seems to me that hidden services still have advantages for leak sites (sources are forced to use Tor, end-to-end crypto without relying on CAs), but for the time being the anonymity of the document upload server isn't one of them. Is this accurate, and is there any estimate of how long you think this will be the case? Months, years?
On Mon, May 27, 2013 at 11:39:06AM -0700, Micah Lee wrote:
Would it be fair to say that using the techniques published in this paper an attacker can deanonymize a hidden service?
Yes, if you're willing to sustain the attack for months.
But actually, this Oakland paper you're looking at is a mash-up of two paper ideas. The first explores how to become an HSDir for a hidden service (so you can learn its address and measure its popularity), and then how to become all the HSDirs for a hidden service so you can tell users it's not there. That part is novel and neat. The second idea explores, very briefly, how guard rotation puts hidden services at risk over the course of months. Imo this second issue, which I think is the one you're interested in, is much better explored in Tariq's WPES 2012 paper: http://freehaven.net/anonbib/#wpes12-cogs and you should realize that the risk applies to all Tor users who use Tor over time and whose actions are linkable to each other (e.g. logging in to the same thing over Tor).
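For readers who haven't seen the first attack: v2 hidden service descriptor IDs are derived deterministically, descriptor-id = H(permanent-id | H(time-period | descriptor-cookie | replica)) per rend-spec, and the responsible HSDirs are the relays whose fingerprints follow that ID in the hash ring. So an attacker can precompute where a service's descriptor will land and brute-force relay identity keys whose fingerprints sit just after it. A rough sketch, with random bytes standing in for hashed RSA identity keys:

    # Simplified sketch of the positioning attack; real Tor uses RSA
    # identity fingerprints, two replicas, and the consensus HSDir list.
    import hashlib, os

    def descriptor_id(permanent_id, time_period, replica, cookie=b""):
        secret_id_part = hashlib.sha1(
            time_period.to_bytes(4, "big") + cookie + bytes([replica])
        ).digest()
        return hashlib.sha1(permanent_id + secret_id_part).digest()

    def forge_fingerprint_after(target, tries=100_000):
        """Find a fingerprint landing just after `target` in the ring;
        os.urandom stands in for generating and hashing fresh keys."""
        best = None
        for _ in range(tries):
            fp = os.urandom(20)
            if fp > target and (best is None or fp < best):
                best = fp
        return best

Since the time-period is predictable, the keys can be generated a day in advance, which is what makes both the enumeration and the "become all the HSDirs" denial-of-service variants cheap.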
Based on this thread it looks like there are several open bugs that need to be fixed to prevent these attacks. It seems to me that hidden services still have advantages for leak sites (sources are forced to use Tor, end-to-end crypto without relying on CAs), but for the time being the anonymity of the document upload server isn't one of them.
It still requires a pretty serious attacker to pull this off. But it is also a realistic attack for this pretty serious attacker. I guess it depends where your bar is -- it cannot, alas, be very high at this point for a low-latency network like Tor that's still pretty small. But I think it would be incorrect to say that hidden services have "no" anonymity. (Also, as you say, anonymity for the news collection website may not be its most important security property.)
The attack to compare it to would be a network-level (AS-level or IX-level) observer who watches whatever parts of the Internet it can see, and hopes that it observes a flow between Alice (the Tor client) and one of her guards. As Alice rotates guards, both due to natural relay churn and due to guard rotation, the chance that such an attacker sees one of these flows goes up. This attack is not easy to resolve, since it has to do with Internet topology, Tor network topology, and the user and destination locations relative to these.
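One way to see why the chance goes up, under the simplifying (assumed) model that each client-guard link is observed independently with probability c: after Alice has used k distinct guards, the attacker's chance of catching at least one flow is 1 - (1 - c)^k.

    # Back-of-the-envelope model, not a measured result: the adversary
    # observes any given client-guard link with probability c.
    def p_observed(c, guards_used):
        return 1 - (1 - c) ** guards_used

    # e.g. an observer covering 10% of client-guard paths:
    for k in (1, 3, 12, 52):
        print(k, round(p_observed(0.10, k), 3))
    # -> 0.1, 0.271, 0.718, 0.996

Relay churn and guard rotation both push k upward over time, which is the core of the argument for longer guard lifetimes below.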
Hidden services do seem inherently at a disadvantage, because the attacker can dictate how often they talk to the network. Whether that disadvantage is significant depends on how pessimistic you are about the rest of the problem.
See also "Measuring the safety of the Tor network" and "Better guard rotation parameters" on http://research.torproject.org/techreports.html for further background open research questions.
Is this accurate, and is there any estimate of how long you think this will be the case? Months, years?
Depends how we end up resolving the guard rotation issue. We should raise the guard rotation period, which will screw up load balancing (and thus performance) unless we teach clients to handle it; and we should reduce the number of guards a client uses, which will increase variance of performance, making more Tor users stuck with crappy guards and hating life.
"Sooner if you help", I think is the phrase the Debian folks use? :)
--Roger
Roger Dingledine:
On Mon, May 27, 2013 at 11:39:06AM -0700, Micah Lee wrote:
Would it be fair to say that using the techniques published in this paper an attacker can deanonymize a hidden service?
Yes, if you're willing to sustain the attack for months.
But actually, this Oakland paper you're looking at is a mash-up of two paper ideas. The first explores how to become an HSDir for a hidden service (so you can learn its address and measure its popularity), and then how to become all the HSDirs for a hidden service so you can tell users it's not there. That part is novel and neat. The second idea explores, very briefly, how guard rotation puts hidden services at risk over the course of months. Imo this second issue, which I think is the one you're interested in, is much better explored in Tariq's WPES 2012 paper: http://freehaven.net/anonbib/#wpes12-cogs and you should realize that the risk applies to all Tor users who use Tor over time and whose actions are linkable to each other (e.g. logging in to the same thing over Tor).
There was also a secondary point in the paper that we should not overlook: you can determine the Guard nodes in use for a hidden service in a very, very short period of time (apparently around an hour?).
If you have any coercive ability over those Guard nodes, you can then demand that they assist you in identifying the hidden service IP.
Additionally, as far as I can see, if you can control the introduction points using the attack from the first part of the paper, you could also perform this attack against a *user* as well (which is the threat model strongbox really tries to address). A captured Introduction Point could repeatedly fail circuits, forcing the user to reconnect on new ones until their Guard node is discovered.
Of course, most users will probably give up trying to use the service long before the hour is up, but if the attack could be optimized in any other way, it could mean trouble..
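To see why an hour is plausible, assume (hypothetically) the attacker's relays hold a fraction m of middle-node selection probability. Each forced rebuild is then an independent chance m that the new middle node is the attacker's and sees the victim's guard, so discovery is a geometric process:

    import math

    # Assumed model: each forced circuit picks an attacker middle node
    # with probability m, independently (path constraints ignored).
    def expected_retries(m):
        return 1 / m                    # mean of a geometric distribution

    def retries_for_confidence(m, p=0.95):
        return math.ceil(math.log(1 - p) / math.log(1 - m))

    # With 1% of middle bandwidth: ~100 forced circuits on average,
    # 299 for 95% confidence -- minutes of induced failures, not months.
    print(expected_retries(0.01), retries_for_confidence(0.01))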
Hidden services do seem inherently at a disadvantage, because the attacker can dictate how often they talk to the network. Whether that disadvantage is significant depends on how pessimistic you are about the rest of the problem.
Yes, specifically: this part of the attack is enabled because the counterparty is able to manipulate their peer's circuits, and induce them to retry their connection repeatedly on new circuits (or otherwise make a lot of new ones).
Is there a reason why services should use a fresh rend circuit for each client?
Moreover, if a circuit succeeds during building, but fails to introduce or rendezvous, why not simply try again on the same initial portion of the circuit (but using a different intro point/rend point) rather than a whole new one?
It seems to me that in general, both parties should be way more insistent on re-using circuits that they think should otherwise work, before trying to make a whole bunch of new ones (especially under conditions that can be directed/manipulated by the adversary).
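As a sketch of what "more insistent" could mean (the methods extend_and_rendezvous, truncate_last_hop, and is_healthy below are hypothetical, not Tor's API): keep the already-built hops and retry with a different final endpoint several times before ever paying for a whole new path.

    MAX_REUSE_ATTEMPTS = 3   # assumed knob

    def connect_with_reuse(circuit, endpoints):
        """Try several intro/rend endpoints over one built circuit
        before abandoning it for a fresh path."""
        for attempt, endpoint in enumerate(endpoints):
            if attempt >= MAX_REUSE_ATTEMPTS or not circuit.is_healthy():
                break
            if circuit.extend_and_rendezvous(endpoint):
                return True
            # Drop only the final hop; the earlier hops stay intact,
            # so the adversary learns nothing new about our path.
            circuit.truncate_last_hop()
        return False   # only now does a brand-new circuit get built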
"Sooner if you help", I think is the phrase the Debian folks use? :)
If nobody can think of any immediate reasons why we wouldn't want to make these changes to hidden service circuit use and lifetimes, I will go ahead and make a new ticket and start thinking about it deeper.
On 28 May 2013 16:33, Mike Perry mikeperry@torproject.org wrote:
Additionally, as far as I can see, if you can control the introduction points using the attack from the first part of the paper, you could also perform this attack against a *user* as well (which is the threat model strongbox really tries to address). A captured Introduction Point could repeatedly fail circuits, forcing the user to reconnect on new ones until their Guard node is discovered.
Of course, most users will probably give up trying to use the service long before the hour is up, but if the attack could be optimized in any other way, it could mean trouble..
They won't give up if they are irssi trying to reconnect to a server. Or a VPN trying to auto-reconnect. Or any manner of non-human auto-retrying applications talking to a Hidden Service.
-tom
Tom Ritter:
On 28 May 2013 16:33, Mike Perry mikeperry@torproject.org wrote:
Additionally, as far as I can see, if you can control the introduction points using the attack from the first part of the paper, you could also perform this attack against a *user* as well (which is the threat model strongbox really tries to address). A captured Introduction Point could repeatedly fail circuits, forcing the user to reconnect on new ones until their Guard node is discovered.
I misspoke above. While it might be possible to capture the Introduction Point using some other attack, the more direct route to attacking clients is to use the /HSDir/ nodes you control from the paper's methods, and fail the circuits of clients asking for the HSdesc you're interested in.
In that case, it would take about an hour to locate the Guard nodes of persistent clients, and then you would have to coerce the Guard nodes into surveilling further (or just giving you their identity key, so you can MITM their TLS connections remotely without their further assistance or knowledge).
Still, less practical than attacking the service side unless you have a client that continues to connect to the target service for long enough for you to find the Guard, compromise it, and then watch their traffic.
Of course, most users will probably give up trying to use the service long before the hour is up, but if the attack could be optimized in any other way, it could mean trouble..
They won't give up if they are irssi trying to reconnect to a server. Or a VPN trying to auto-reconnect. Or any manner of non-human auto-retrying applications talking to a Hidden Service.
Absolutely correct. Hopefully Strongbox doesn't keep retrying for you in the background or anything like that.
Mike Perry:
Roger Dingledine:
http://www.ieee-security.org/TC/SP2013/papers/4977a080.pdf
But actually, this Oakland paper you're looking at is a mash-up of two paper ideas. The first explores how to become an HSDir for a hidden service (so you can learn its address and measure its popularity), and then how to become all the HSDirs for a hidden service so you can tell users it's not there. That part is novel and neat. The second idea explores, very briefly, how guard rotation puts hidden services at risk over the course of months. Imo this second issue, which I think is the one you're interested in, is much better explored in Tariq's WPES 2012 paper: http://freehaven.net/anonbib/#wpes12-cogs and you should realize that the risk applies to all Tor users who use Tor over time and whose actions are linkable to each other (e.g. logging in to the same thing over Tor).
There was also a secondary point in the paper that we should not overlook: you can determine the Guard nodes in use for a hidden service in a very, very short period of time (apparently around an hour?).
If you have any coercive ability over those Guard nodes, you can then demand that they assist you in identifying the hidden service IP.
Additionally, as far as I can see, if you can control the HSDir nodes using the attack from the first part of the paper, you could also perform this attack against a *user* as well (which is the threat model strongbox really tries to address). A captured HSDir could repeatedly fail descriptor fetches, forcing the user to reconnect on new circuits until their Guard node is discovered.
Hidden services do seem inherently at a disadvantage, because the attacker can dictate how often they talk to the network. Whether that disadvantage is significant depends on how pessimistic you are about the rest of the problem.
Yes, specifically: this part of the attack is enabled because the counterparty is able to manipulate their peer's circuits, and induce them to retry their connection repeatedly on new circuits (or otherwise make a lot of new ones).
Is there a reason why services should use a fresh rend circuit for each client?
Moreover, if a circuit succeeds during building, but fails to introduce or rendezvous, why not simply try again on the same initial portion of the circuit (but using a different intro point/rend point) rather than a whole new one?
It seems to me that in general, both parties should be way more insistent on re-using circuits that they think should otherwise work, before trying to make a whole bunch of new ones (especially under conditions that can be directed/manipulated by the adversary).
"Sooner if you help", I think is the phrase the Debian folks use? :)
If nobody can think of any immediate reasons why we wouldn't want to make these changes to hidden service circuit use and lifetimes, I will go ahead and make a new ticket and start thinking about it deeper.
https://trac.torproject.org/projects/tor/ticket/9001
After looking into how hard it would be to change the DESTROY and rend circuit reuse behavior (it would require a network upgrade and a full protocol redesign), I think I am leaning towards creating a "Virtual Circuit" abstraction layer that would enforce the use of the same 3 hops even after most forms of circuit failure.
This would slow discovery of the Guard node by removing the ability of the adversary to expose you to all the middle nodes in the network in a short period of time through circuit failure attacks.
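A rough sketch of the abstraction (illustrative Python, not the design in ticket #9001): pin the three relays at construction time, and rebuild over those same relays whenever the underlying circuit dies, so induced failures stop exposing new middle nodes.

    class CircuitFailure(Exception):
        pass

    class VirtualCircuit:
        """Pins (guard, middle, last) across most circuit failures."""

        def __init__(self, path, build):
            self.path = path      # the three relays, chosen once
            self._build = build   # callback: path -> low-level circuit
            self._circ = build(path)

        def send(self, cell):
            try:
                self._circ.send(cell)
            except CircuitFailure:
                # Rebuild over the *same* hops; only a hard failure
                # (e.g. a relay leaving the consensus) should ever
                # change self.path.
                self._circ = self._build(self.path)
                self._circ.send(cell)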