Hi Roger,
Thanks for the comments! And apologies for the insane delay... Please see below. In summary, we're working on making the BGP monitoring data publicly available: (1) providing the data for people to use, and (2) providing our analytics on the data as a reference. The webpage is currently hosted here: http://raptor.princeton.edu/bgp-tor.html (we're currently resolving some hosting space issues, and will keep updating the page once we get more space)
1a. Challenges of the system (in terms of false positives, the need of a routing expert, etc.)
There are multiple phases we want to achieve with the system: (1) the first phase is simply making the BGP data on the Tor network available, which will open up this research area to the community and the question of how to make the best use of these data; (2) second, we offer our detection heuristics (as presented in the paper) as a resource for anomaly detection, and the historical alert data will be publicly available online; (3) finally, we want to turn it into a real alert system that people can subscribe to and receive alerts from in real time. We agree that this will be challenging, since we do not want to overwhelm users with a large number of alerts.
-> Lower the alert rate and analyze alerts

The best approach is always to ask the network operators involved in the alerts - they know best whether they're truly under attack, or whether an alert is due to internal configurations or agreements that may not be public information. Many previous monitoring systems (cite here) involved this kind of "input" from network operators themselves. However, Tor relay operators are not the same as network operators - from our interactions with some relay operators, we learned that many purchased services from ISPs and thus have no control over the AS/IP space of their relays. In that case, it may be hard to get accurate input from relay operators, since their knowledge of their providers' network configurations is limited.

Thus, for the relay operators who also operate the network themselves, it will be best to get input from them (e.g., they can specify "rules" a priori to reduce false positives, and the rules can also help automate the analysis of alerts - see the sketch below); for others with limited information about the network they're on, we'll have to resort to a human expert to analyze the routing data. In terms of identifying such an expert, we don't really have a good answer. The good news is that the Tor network is of a relatively manageable size (in terms of number of ASes, etc.) compared to the rest of the Internet.

We can also have different levels of alerts, to make sure people pay enough attention to the highly suspicious ones while not entirely missing the mildly suspicious ones. In terms of how to determine the "level", we have some thoughts, such as incorporating more historical data as well as inferred business relationships, but these need to be verified with more data analysis.
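To make the "rules" idea concrete, here is a minimal sketch (the Alert type, prefixes, and ASNs are made up for illustration; this is not code from our system) of how an operator-declared whitelist of expected origins could automatically suppress benign alerts:

    # Hypothetical operator-supplied "rules" for suppressing known-benign alerts.
    from dataclasses import dataclass
    from ipaddress import ip_network

    @dataclass
    class Alert:
        prefix: str        # announced prefix, e.g. "192.0.2.0/24"
        origin_asn: int    # AS that originated the announcement

    # An operator who controls their own network can declare, a priori, which
    # ASNs are allowed to originate each of their prefixes.
    OPERATOR_RULES = {
        ip_network("192.0.2.0/24"): {64500, 64501},  # documentation prefix/ASNs
    }

    def is_expected(alert: Alert) -> bool:
        """Return True if an operator rule explains this announcement."""
        net = ip_network(alert.prefix)
        for ruled_prefix, allowed_origins in OPERATOR_RULES.items():
            # A rule covers exact and more-specific announcements of its prefix.
            if net.subnet_of(ruled_prefix) and alert.origin_asn in allowed_origins:
                return True
        return False

    alerts = [Alert("192.0.2.0/25", 64500), Alert("192.0.2.0/24", 65000)]
    print([a for a in alerts if not is_expected(a)])  # only the unexpected origin remains

Rules like these can only come from operators who actually control their network; everyone else still needs the human-expert path above.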
-> More data on the false positive rates

The paper presented data up to May 2016. We will keep updating this webpage with more recent data: http://raptor.princeton.edu/bgp-tor.html
1b. How does your live-BGP-feed-anomaly-detector compare (either in design, or in closeness to actually being usable ;) to the one Micah Sherr was working on from their PETS 2016 paper?
The PETS 2016 paper adopts a data-plane approach, which requires active traceroute measurements, while we use a control-plane approach that is entirely passive: we consume live BGP feeds rather than sending probes. We think the two approaches can complement each other well.
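For a feel of the difference, here is a rough sketch of the passive control-plane side (using the pybgpstream library as one possible feed; the collector, time window, and the monitored prefix/origin are placeholder assumptions, and field names follow pybgpstream 2.x):

    # Flag announcements that cover a monitored guard prefix but come from an
    # unexpected origin AS - no traceroutes or other active probing involved.
    import pybgpstream
    from ipaddress import ip_network

    MONITORED = {ip_network("192.0.2.0/24"): "64500"}  # guard prefix -> expected origin

    stream = pybgpstream.BGPStream(
        from_time="2017-04-17 00:00:00", until_time="2017-04-17 00:10:00",
        collectors=["route-views2"], record_type="updates",
    )

    for elem in stream:
        if elem.type != "A":  # announcements only
            continue
        prefix = ip_network(elem.fields["prefix"])
        path = elem.fields["as-path"].split()
        origin = path[-1] if path else None
        for guard_prefix, expected in MONITORED.items():
            if prefix.version != guard_prefix.version:
                continue
            if prefix.subnet_of(guard_prefix) and origin != expected:
                print(f"possible hijack: {prefix} originated by AS{origin}, expected AS{expected}")

A data-plane detector would instead traceroute toward the relays and look for path changes; the two signals catch different things, which is why we think they complement each other.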
1c. Your paper suggests that an alert from a potential hijack attempt could make clients abandon the guard for a while, to keep clients safe from hijack attempts. What about second-order effects of such a design, where the attacker's *goal* is to get clients to abandon a guard, so they add some sketchy routes somewhere to trigger an alert? Specifically, how much easier is it to add sketchy routes that make it look like somebody is attempting an attack, compared to actually succeeding at hijacking traffic?
To "add sketchy routes", the attackers need to make some routing announcement, and the announcement will affect certain parts of the Internet (big or small). (Note, not all announcements will necessarily have an effect, for example, less-specific announcements usually won't be used, so our system doesn't focus on the less-specific ones.)
The next question is: what's the goal of the attacker - forcing clients to choose a new guard, or actually deanonymizing them? It could be either. Say an attacker simply announces the prefix covering the guard relay - this will blackhole all the traffic going to the guard, so the connections between clients and the guard will be terminated at some point, and the clients will have to choose a new guard. So the goal of the attack here could just be making the guard unusable (as opposed to deanonymization, which requires more sophisticated attacks and more work from the attacker). And this could really be happening without being noticed, especially if the attack announcement only affects a small part of the Internet and lasts for a short amount of time. For deanonymization, the attack needs to last longer (per the RAPTOR paper, at least 5 minutes for a decent accuracy rate).

Thus, our current main goal is simply forcing attackers to launch attacks in a publicly viewable way rather than stealthily (as in the hijack example above), and thereby to increase routing transparency in the Tor network. The next goal will be to couple the alert system with client behavior - and yes, this can be tricky, and we currently don't have a good answer for it.
Yixin
----- Original Message -----
From: "Roger Dingledine" <arma@mit.edu>
To: "mittal prateek" <mittal.prateek@gmail.com>, "Yixin Sun" <yixins@CS.Princeton.EDU>, tor-dev@lists.torproject.org
Sent: Monday, April 17, 2017 3:04:33 PM
Subject: Thoughts on the Counter-RAPTOR paper
Hi Prateek, Yixin, (and please involve your other authors as you like),
(I'm including tor-dev here too so other Tor people can follow along, and maybe even get involved in the research or the discussion.)
I looked through "Counter-RAPTOR: Safeguarding Tor Against Active Routing Attacks": https://arxiv.org/abs/1704.00843
For the tl;dr for others here, the paper:
a) comes up with metrics for how to measure resilience of Tor relays to BGP hijacking attacks, and then does the measurements;
b) describes a way that clients can choose their guards to be less vulnerable to BGP hijacks, while also considering performance and anonymity loss when guard choice is influenced by client location; and
c) builds a monitoring system that takes live BGP feeds and looks for routing table anomalies that could be hijack attempts.
Here are some hopefully useful thoughts:
-----------------------------------------------------------------------
0) Since I opted to write these thoughts in public, I should put a little note here in case any journalists run across it and wonder. Yay research! We love research on Tor -- in fact, research like this is the reason Tor is so strong. For many more details about our perspective on Tor research papers, see https://blog.torproject.org/blog/tor-heart-pets-and-privacy-research-communi...
-----------------------------------------------------------------------
1a) The "live BGP feed anomaly detection" part sounds really interesting, since in theory we could start using it really soon now. Have you continued to run it since you wrote the paper? Have you done any more recent analysis on its false positive rate since then?
I guess one of the real challenges here is that since most of the alerts are false positives, we really need a routing expert to be able to look at each alert and assess whether we should be worried about it. How hard is it to locate such an expert? Is there even such a thing as an expert in all routing tables, or do we need expertise in "what that part of the network is supposed to look like", which doesn't easily scale to the whole Internet?
Or maybe said another way, how much headway can we make on automating the analysis, to make the frequency of alerts manageable?
I ask because it's really easy to write a tool that sends a bunch of warnings, and if some of them are false positives, or heck even if they're not but we don't know how to assess how bad they really are, then all we've done is make yet another automated emailer. (We've made a set of these already, to e.g. notice when relays change their identity key a lot: https://gitweb.torproject.org/doctor.git/tree/ but often nobody can figure out whether such an anomaly is really an attack or what, so it's a constant struggle to keep the volume low enough that people don't just ignore the mails.)
The big picture question is: what steps remain from what you have now to something that we can actually use?
1b) How does your live-BGP-feed-anomaly-detector compare (either in design, or in closeness to actually being usable ;) to the one Micah Sherr was working on from their PETS 2016 paper? https://security.cs.georgetown.edu/~msherr/reviewed_abstracts.html#tor-datap...
1c) Your paper suggests that an alert from a potential hijack attempt could make clients abandon the guard for a while, to keep clients safe from hijack attempts. What about second-order effects of such a design, where the attacker's *goal* is to get clients to abandon a guard, so they add some sketchy routes somewhere to trigger an alert? Specifically, how much easier is it to add sketchy routes that make it look like somebody is attempting an attack, compared to actually succeeding at hijacking traffic?
I guess a related question (sorry for my BGP naivete) is: if we're worried about false positives in the alerts, how much authentication and/or attribution is there for sketchy routing table entries in general? Can some jerk drive up our false positive rate, by adding scary entries here and there, in a way that's sustainable? Or heck, can some jerk DDoS parts of the Internet in a way that induces routing table changes that we think look sketchy? These are not reasons to not take the first steps in the arms race, but it's good to know what the later steps might be.
-----------------------------------------------------------------------
2a) Re changing guard selection, you should check out proposal 271, which resulted in the new guard-spec.txt as of Tor 0.3.0.x: https://gitweb.torproject.org/torspec.git/tree/guard-spec.txt I don't fully understand it yet (so many things!), but I bet any future guard selection change proposal should be relative to this design.
2b) Your guard selection algorithm makes the assumption that relays with the Guard flag are the only ones worth choosing from, and then describes a way to choose from among them with different weightings. But you could take a step back, and decide that resilience to BGP hijack should be one of the factors for whether a relay gets the Guard flag in the first place.
It sounded from your analysis like some ASes, like OVH, are simply bad news for (nearly) all Tor clients. Your proposed guard selection strategy reduced, but did not eliminate, the chances that clients would get screwed by picking one of these OVH relays. The tradeoff was that by only reducing the chances, you kept the performance changes from being as extreme as they might otherwise have been.
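To make that tradeoff concrete, here's a rough sketch of the blended weighting as I understand it from the paper (relay names and numbers are made up, and the paper's exact normalization may differ):

    # Selection weight mixes a client-specific hijack-resilience score with
    # normalized bandwidth via a tunable parameter alpha.
    import random

    relays = {  # name -> (resilience in [0,1] for this client, bandwidth in MB/s)
        "guardA": (0.9, 10.0),
        "guardB": (0.4, 50.0),  # fast, but in a poorly resilient AS (think OVH)
        "guardC": (0.7, 20.0),
    }

    def selection_weights(relays, alpha=0.5):
        total_bw = sum(bw for _, bw in relays.values())
        return {name: alpha * res + (1 - alpha) * (bw / total_bw)
                for name, (res, bw) in relays.items()}

    weights = selection_weights(relays)
    # alpha=0 is bandwidth-only (vanilla-style); alpha=1 ignores bandwidth.
    print(weights, random.choices(list(weights), weights=list(weights.values()))[0])

With alpha strictly between 0 and 1, guardB still gets picked sometimes, which is exactly the "reduced but not eliminated" behavior I described above.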
How much of the scariness of a relay is a function of the location of the particular client who is considering using it, and how much is a function of the average (expected) locations of clients? That is, can we identify relays that are likely to be bad news for many different clients, and downplay their weights (or withhold the Guard flag) for everybody?
The advantage of making the same decision for all clients is that you can get rid of the "what does guard choice tell you about the client" anonymity question, which is a big win if the rest of the effects aren't too bad.
Which leads me to the next topic:
-----------------------------------------------------------------------
3) I think you're right that when analyzing a new path selection strategy, there are three big things to investigate:
a) Does the new behavior adequately accomplish the goal that made you want a new path selection strategy (in this case resilience to BGP attacks)?
b) What does the new behavior do to anonymity, both in terms of the global effect (e.g. by flattening the selection weights or by concentrating traffic in fewer relays or on fewer networks) and on the individual epistemic side (e.g. by leaking information about the user because of behavior that is a function of sensitive user details)?
c) What are the expected changes to performance, and are there particular scenarios (like high load or low load) that have higher or lower impact?
I confess that I don't really buy your analysis for 'b' or 'c' in this paper. Average change in entropy doesn't tell me whether particular user populations are especially impacted, and a tiny Shadow simulation with one particular network load and client behavior doesn't tell me whether things will or won't get much worse under other network loads or other client behavior.
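For example, the per-population check I'd want is something like computing the selection entropy separately for each client location instead of one average (toy numbers below, just to show the failure mode):

    # An average entropy change can hide the fact that one client population's
    # guard choices became heavily concentrated. Distributions are made up.
    from math import log2

    def entropy(dist):
        return -sum(p * log2(p) for p in dist if p > 0)

    before = [0.25, 0.25, 0.25, 0.25]  # uniform over four guards
    after = {
        "clients in AS X": [0.25, 0.25, 0.25, 0.25],  # unaffected
        "clients in AS Y": [0.85, 0.05, 0.05, 0.05],  # heavily concentrated
    }

    print(f"before: {entropy(before):.2f} bits")
    for location, dist in after.items():
        print(f"after, {location}: {entropy(dist):.2f} bits")
    # The average across locations can look mild even when one population
    # loses most of its anonymity set.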
I can't really fault this paper though, because the structure of an academic research paper means you can only do so much in one paper, and you did a bunch of other interesting things instead. We, the Tor research community, really need better tools for reasoning about the interaction between anonymity and performance.
In fact, there sure have been a lot of Tor path selection papers over the past decade, each inventing its own ad hoc analysis approach for showing that its proposed change doesn't impact anonymity or performance "too much". Is it time for a Systematization of Knowledge paper on this area -- with the goal of coming up with best practices that future papers can use to provide more convincing analysis?
--Roger