Hi folks,
Here are some brainstorming thoughts to get us started improving our "unclassifiable protocols" idea.
----------------------------------------------------------------------
For context, right now we have essentially one deployed design in this space, which is obfs4: https://gitweb.torproject.org/pluggable-transports/obfs4.git/tree/doc/obfs4-... This protocol does a short handshake (one client message, one server message) to establish session keys, then it just sends bytes back and forth with an (additional) layer of encryption, so no payload bytes help an attacker do content-based protocol classification.
----------------------------------------------------------------------
One of the more thorough analyses of obfs4 comes from the FTE folks: https://www.freehaven.net/anonbib/#ccs2015obfsdetection where they show a classifier that assesses the entropy of the first server-side packet plus the length of that initial message, and with it they can consistently pick out obfs4 flows with only a 0.25% false positive rate against background traffic they recorded at their university.
That false positive rate is where it gets interesting: it sounds good until we realize that it means it flags 1 in every 400 flows as being obfs4, i.e. it falsely tags about 36000 flows in the 14M flow dataset. And the false positives aren't randomly distributed either: 1.22% of their SSL flows are falsely tagged as obfs4, and it's the ones where the SNI in the SSL handshake has a lot of entropy (because everything else in the SSL handshake is already random). Also, entire protocols like RTMP get caught up and tagged as obfs4, because their handshake starts out with a block of random bytes.
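To make that classifier building block concrete, here's a minimal Python sketch of the same style of test: flag a flow when its first server-side message is both high-entropy and of a plausible length. The thresholds below are invented for illustration; the paper's exact features and cutoffs differ.

    import math
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        # Bits of entropy per byte, between 0.0 and 8.0.
        if not data:
            return 0.0
        total = len(data)
        return -sum((n / total) * math.log2(n / total)
                    for n in Counter(data).values())

    def flags_as_obfs4(first_server_msg: bytes) -> bool:
        # obfs4's server handshake is uniformly random, so its entropy sits
        # near 8 bits/byte; the thresholds here are illustrative only.
        return (shannon_entropy(first_server_msg) > 7.2
                and 100 <= len(first_server_msg) <= 8192)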
A key takeaway is that if they ran this classifier, with its high true positive rate and low false positive rate, on a large country's network traffic, approximately none of the flows that it would tag as obfs4 would actually be obfs4. Their Section 7 in that paper has a good discussion of the base rate issues and how they apply here.
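Here's the base rate issue as a back-of-the-envelope Bayes calculation. Only the 0.25% false positive rate comes from the paper; the true positive rate and base rate are assumptions for illustration:

    tpr = 0.98         # assumed true positive rate (illustrative)
    fpr = 0.0025       # false positive rate from the paper
    base_rate = 1e-6   # assumption: 1 in a million flows is actually obfs4

    # Bayes: P(actually obfs4 | flagged)
    precision = (tpr * base_rate) / (tpr * base_rate + fpr * (1 - base_rate))
    print(f"P(actually obfs4 | flagged) = {precision:.4%}")  # roughly 0.04%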
Are there other papers that design or analyze classifiers for obfs4?
----------------------------------------------------------------------
The good news: It would seem that obfs4 is most effective in the "long tail of varied traffic" scenario. That is, considering the spectrum between "small corporate network" and "China backbone", obfs4 needs that broader background traffic so that its false positives make blocking too painful.
The bad news: I still worry about an attack that puts together many building blocks, each of which individually is like the "1% false positive rate" classifier in the above paper, but that together drive their false positive rate low enough that blocking is safe to do. One observation there is that the more complexity there is to a protocol, the harder it is to "really" look like it, i.e. to match it in all dimensions at once.
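As a toy illustration of why composition is scary: if the censor can find several building-block classifiers whose false positives are roughly independent (their best case, and an assumption real traffic may not satisfy), the combined false positive rate shrinks multiplicatively:

    # Assumption: the blocks' false positives are independent (the censor's
    # best case); then the combined false positive rate is the product.
    fpr_per_block = 0.01
    for n in range(1, 5):
        print(f"{n} block(s): combined false positive rate ~ {fpr_per_block ** n:.0e}")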
Consider this lovingly handcrafted graph, where the X axis is how thoroughly we try to look like some expected protocol, and the Y axis is how close we can get to making the adversary unwilling to block us:
  1_|                                               |_1
    |                                               |
    |\                                             /|
    | \                                           / |
    |  \                                         /  |
    |   \                                       /   |
  0_|    \_____________________________________/    |_0
    |                                               |
There's a huge valley in the middle, where we're trying to look like something, but we're not doing it well at all, so there is little damage from blocking us.
The ramp on the right is the game that FTE tries to play, where in theory if they're not perfectly implementing their target protocol then the adversary can deploy a distinguisher and separate their traffic from the "real" protocol, but in practice there are enough broken and weird implementations of the protocol in the wild that even being "close enough" might cause the censor to hesitate.
And the ramp on the left is our unclassifiable traffic game, where we avoid looking like any particular expected protocol, and instead rely on the long tail of legitimate traffic to include something that we blend with.
Observation: *both* of these ramps are in the "broken in theory, but works in practice" situation. I started out thinking it was just the obfs4 one, and that the FTE one was somehow better grounded in theory. But neither side is actually trying to build the ideal perfect protocol, whatever that even is. The game in both cases is about false positives, which come from messy (i.e. hard to predict) real-world background traffic.
One research question I wrestled with while making the graph: which ramp is steeper? That is, which of these approaches is more forgiving? Does one of them have a narrower margin for error than the other, where you really have to be right up against the asymptote or it doesn't work so well?
For both approaches, their success depends on the variety of expected background traffic.
The steepness of the right-hand (look-like-something) ramp also varies greatly based on the specific protocol it's trying to look like. At first glance we might think that the more complex the protocol, the better you're going to need to be at imitating it in all dimensions. That is, the more aspects of the protocol you need to get right, the more likely you'll slip up on one of them. But competing in the other direction is: the more complex the protocol, the more broken weird implementations there could be in practice.
I raise the protocol complexity question here because I think it has something subtle and critical to do with the look-like-nothing strategy. Each dimension of the protocol represents another opportunity to deploy a classifier building block, where each classifier by itself is too risky to rely on, but the composition of these blocks produces the killer distinguisher. One of the features of the unclassifiable protocol that we need to maintain, as we explore variations of it, is the simplicity: it needs to be the case that the adversary can't string together enough building-block classifiers to reach high confidence. We need to force them into building classifiers for *every other protocol*, rather than letting them build a classifier for our protocol.
(I'll also notice that I'm mushing together the notion of protocol complexity with other notions like implementation popularity and diversity: a complex proprietary protocol with only one implementation will be no fun to imitate, but the same level of complexity where every vendor implements their own version will be much more workable.)
----------------------------------------------------------------------
I've heard two main proposed ways in which we could improve obfs4 in theory -- and hopefully thus in practice too:
(A) Aaron's idea of using the latest adversarial machine learning approaches to evolve a traffic transform that resists classification. That is, pit the classifiers against our transform, in many different background traffic scenarios, such that we end up with a transform that resists classification (low true positive rate and/or high false positive rate) in many of the scenarios.
(B) Philipp's idea from scramblesuit of having the transform be parameterized, and for each bridge we choose and stick with a given set of parameters. That way we're not generating *flows* that each aim to blend in, but rather we're generating bridges that each aim to blend differently. This diversity should adapt well to many different background traffic scenarios because in every scenario some bridges might be identified but some bridges will stay under the radar.
At first glance these two approaches look orthogonal, i.e. we can do both of them at once. For example, Aaron's approach tells us the universe of acceptable parameters, and Philipp's approach gives us diversity within that universe.
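To make (B) concrete, here's a minimal sketch of a per-bridge parameterized transform. The parameter names and derivation here are invented for illustration; scramblesuit's actual knobs and mechanism differ.

    import hashlib
    import random
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TransformParams:
        min_padding: int     # bytes of padding added to each message
        max_padding: int
        burst_delay_ms: int  # artificial delay between write bursts

    def params_for_bridge(bridge_id: bytes) -> TransformParams:
        # Derive stable parameters from the bridge's identity: every flow
        # to one bridge is shaped the same way, but different bridges look
        # different from each other.
        seed = int.from_bytes(hashlib.sha256(bridge_id).digest()[:8], "big")
        rng = random.Random(seed)
        lo = rng.randrange(0, 512)
        return TransformParams(
            min_padding=lo,
            max_padding=lo + rng.randrange(64, 1024),
            burst_delay_ms=rng.randrange(0, 50),
        )

    print(params_for_bridge(b"bridge-A-fingerprint"))
    print(params_for_bridge(b"bridge-B-fingerprint"))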
Aaron: How do we choose the parameter space for our transforms to explore? How much of that can be automated, and how much needs to be handcrafted by subject-matter-experts? I see how deep learning can produce a magic black-box classifier, but I don't yet see how that approach can present us with a magic black-box transform.
And as a last note, it would sure be nice to produce transforms that are robust relative to background traffic, i.e. to not be brittle or overfit to a particular scenario. Otherwise we're giving up one of our few advantages in the arms race, which is that right now we force the censor to characterize the traffic -- including expected future traffic! -- and assess whether it's safe to block us.
There. Hopefully some of these ideas will cause you to have better ideas. :)
--Roger
FYI, I filed a ticket to keep track of our progress towards improving obfs4: https://bugs.torproject.org/30716
On Fri, May 31, 2019 at 01:02:35AM -0400, Roger Dingledine wrote:
> Here are some brainstorming thoughts to get us started improving our
> "unclassifiable protocols" idea.
For posterity, here are the notes I wrote for my 5-minute talk last week during the Tor All Hands, to summarize the area and this post:
-------
Today I want to talk about pluggable transports, and why they work in practice.
Big picture as a reminder: pluggable transports are ways to transform Tor traffic so it's harder to block or recognize.
Our current most popular transport is obfs4, which just adds a layer of encryption so no recognizable headers.
Works in China; also used by other projects (like VPN providers and other circumvention tools).
-------
But, *why* does it work? In theory it doesn't look like any traffic they expect: not web browsing, not messaging, not any expected protocol.
Our guess for why it works is that there's a long tail of China backbone traffic that it blends with, i.e. when their protocol classifier says "no idea", blocking all unclassified traffic causes too much collateral damage.
For example, if their classifier has a 1% false positive rate, it would block 1 in every 100 innocent flows -- and since actual obfs4 traffic is far rarer than 1 in 100, almost everything it blocks would be a mistake. This is what people mean when they say "base rate fallacy".
We're turning the game around: now *they* have to decide what is the cost of blocking us.
Maybe they *could* block it, but they don't, because they're doing fine blocking our bridges by IP address, i.e. beating bridgedb? But is that true for Lantern and others too? And, what if we make bridgedb stronger?
Quote from Raytheon DPI dev: "This looks like the sort of protocol that the general's engineers deployed ten years ago after the last coup, and nobody has looked at since. If I block it, some building is going to disappear from the internet. No way am I going to touch this."
------
That is, sure, on any small or medium-sized network it will look funny, but on a big enough network there are other things that look like it. E.g. RTMP starts with a randomized handshake too. Most, but not all, of the SSL handshake is random-looking.
But if you look at enough packets, or you look at the timing and volume, surely you can tell "obfuscated Tor" apart from other things?
So we're in this weird situation where it's broken in theory but it works in practice.
------
Compare to FTE/Marionette, which tries to look like http.
I used to think these were two different approaches to the problem: look-like-nothing vs look-like-something.
You have to *really* look like nothing, or *really* look like your target thing, and in between is this dead zone where the censor has it easy.
But, Marionette probably works in practice because it blends with the long tail of weird and buggy http implementations.
(They're never going to be fully bug compatible with Firefox and Apache.)
#1 So it's really the same game! They need a long tail too, and for them it's about false positives too.
-----
#2 Ok, how do you evaluate these things?
Get a testbed, install some background apps, compare? But it will never be enough background traffic to be "realistic"!
Get a real network, add some obfs4 to it, compare? We were on track to do that with a university CS department, but then covid cleared out their building. Still sounds useful, because whether it works or not, it will tell us something about how big is big enough.
-----
#3 And then the third question is: can we come up with a theory for how to reason about the security here? Maybe that can lead to new designs that are more robust to blocking in practice too? One candidate tool: Generative Adversarial Networks (GANs).
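As a strawman for what that adversarial game might look like, here's a toy (not a real GAN, and all distributions and constants are invented): the censor fits the best single length threshold separating our flows from the background, and the transform responds by nudging its padding toward the bulk of the background.

    import random

    random.seed(0)
    # Background ("cover") traffic: flow lengths from an invented distribution.
    background = [max(1.0, random.gauss(600, 300)) for _ in range(2000)]

    def best_threshold(ours, theirs):
        # Censor's move: pick the cutoff maximizing true positives minus
        # false positives (candidates subsampled to keep the toy fast).
        candidates = sorted({int(x) for x in ours + theirs})[::25]
        def score(t):
            tp = sum(x >= t for x in ours) / len(ours)
            fp = sum(x >= t for x in theirs) / len(theirs)
            return tp - fp
        return max(candidates, key=score)

    mean_len = 1500.0  # transform parameter: mean padded flow length
    for _ in range(10):
        ours = [random.gauss(mean_len, 50) for _ in range(300)]
        t = best_threshold(ours, background)
        caught = sum(x >= t for x in ours) / len(ours)
        if caught > 0.05:  # transform's move: drift toward the background bulk
            mean_len = 0.7 * mean_len + 0.3 * 600

    print(f"final mean length {mean_len:.0f}, last-round catch rate {caught:.0%}")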