Hello,
I am jvoisin, the student who will work on
tails-server[1] during this year's GSoC;
my mentors will be intrigeri and anonym.
I already worked with Tor last year, on the
"Metadata Anonymisation Toolkit" project[2],
and I hope to do as well as (or even better than)
I did last year!
You can check my proposal[3] if you are curious about
what I'm planning to do this summer.
I'm hanging on #tor-dev with the nick jvoisin.
Have a nice day,
- jvoisin
1. https://tails.boum.org/todo/server_edition/
2. https://mat.boum.org/
3. https://www.google-melange.com/gsoc/proposal/review/google/gsoc2012/jvoisin…
Hello.
I'm writing this email to introduce myself to the Tor community.
I'm new to Tor; I started to discover its community through the
GlobaLeaks project, and now, for the Google Summer of Code, I will be
involved in the APAF project.
So over the coming days I will probably hang around your IRC channel
and this mailing list, getting to know some of the core developers.
The APAF project, as described by Arturo in
https://lists.torproject.org/pipermail/tor-dev/2012-March/003416.html,
aims to be a static file server written in Python, following the same
development idea behind the Tor Browser Bundle: a simple and
portable executable for non-technical users.
So the core idea is to create a standalone Python .exe / .app so that
anybody can easily set up a Tor hidden service and configure it (via
web browser or GUI, to be defined), in the hope that this will spread
the use of hidden services, which AFAIK aren't used very much nowadays.
The most powerful concept behind this, in my opinion, is that by
design APAF should be as modular as possible, so that any other
Python application developer will be able to build their own software
in such a way that it runs as a Tor hidden service. "Most powerful"
because learning and collaboration are at the heart of the open-source
idea, which is what I hope to work for.
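To give a rough idea of what the bundle would automate (this is just a
sketch, not APAF code): a tiny Python static file server bound to
localhost, which a local Tor instance can expose as a hidden service
through the standard HiddenServiceDir / HiddenServicePort options.

# Sketch only, not APAF code: serve a directory locally and let a local
# Tor instance map a hidden service onto it.  The torrc lines below are
# standard Tor options:
#
#   HiddenServiceDir /var/lib/tor/apaf_demo/
#   HiddenServicePort 80 127.0.0.1:8080
#
import SimpleHTTPServer   # Python 2 stdlib (http.server in Python 3)
import SocketServer

BIND_ADDR = "127.0.0.1"   # only reachable locally; Tor forwards to us
PORT = 8080               # must match the HiddenServicePort target above

handler = SimpleHTTPServer.SimpleHTTPRequestHandler
httpd = SocketServer.TCPServer((BIND_ADDR, PORT), handler)
print("Serving the current directory on %s:%d" % (BIND_ADDR, PORT))
httpd.serve_forever()

Everything APAF adds on top (packaging as .exe / .app, the
configuration UI, the module system) would sit around a core like this.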
Deadlines for the project are described in
https://pad.riseup.net/p/APAF-timeline
Starting today, I am going to outline the design of APAF: which
libraries it is going to use, mockups, how it will be configured, and
so on. Since I will probably discuss all these choices with my mentor
Arturo on the IRC channel #tor-dev, any advice is welcome :)
Regards,
The purpose of Exit Enclaves was to allow people running a website to
let Tor users access it without ever leaving the Tor network. This
gives the clients end-to-end encryption with the target destination.
Even in previous versions this had some issues, one of which was the
fact that on the first connection the user would not be accessing the
destination over a Tor circuit if the destination was provided as a
hostname (and not an IP).
The current stable version of Tor (0.2.2.x) still supports Exit
Enclaves. The new versions of Tor (> 0.2.3.x) use a new descriptor
format (microdescriptors) that allows relays to specify an Exit Enclave
policy, but clients will not use it, therefore voiding the purpose of
exit enclaving.
I believe there is a need for something similar to Tor Exit Enclaving,
and the closest thing I see fitting these requirements is Tor Encrypted
Services
(https://gitweb.torproject.org/torspec.git/blob/HEAD:/proposals/ideas/xxx-en…).
Encrypted Services (ES) are basically regular Tor Hidden Services that
do not provide anonymity for the server and gain better performance
because of this (they have a one-hop circuit to the RV and IP).
The problem with making Encrypted Services work as a replacement for
Exit Enclaves is that the client needs a way to learn that its
destination is also running as an Encrypted Service.
In this very high-level overview I don't go into much detail about how
this system would actually work, but I hope it will prompt some
discussion on the matter.
I think this can be achieved mainly in 3 ways:
1) The client already knows all of the ESs
2) The client looks up whether a destination is an ES when trying to
connect to it
3) The final hop looks up whether the destination is an ES
These all have some drawbacks:
In 1), the client needs to download the full list of ESs, so if the
number of ESs gets very large, clients will take much longer to
bootstrap and will need to store more data. The good thing about this,
though, is that connections would remain as fast as they are at the
moment, since it does not require any extra connections.
In 2), the client needs to complete an extra round trip for every
connection. I don't think this is a viable solution, as it would
degrade the quality of connections for every user.
In 3), the final hop would perform, alongside the normal A lookup for
hostnames, a CNAME lookup (or a lookup of another special field). If
such a lookup returns a .onion address, then instead of returning a
RELAY_CONNECTED cell it would return an ENCRYPTED_SERVICE cell
containing the .onion address of the target ES. The client would then
cache this address and connect to it.
This approach adds a little bit of overhead (since two DNS lookups need
to be made), but it is still faster than 2).
It suffers from the issue that the exit node could spoof the .onion
address and redirect the user to a malicious .onion address. This is
quite a tough problem, and I am still unsure how it could be solved. If
we had support for DNSSEC, this issue could be mitigated.
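To make 3) a bit more concrete, here is a rough sketch of the exit-side
check, written in Python with the third-party dnspython package; the
.onion-in-CNAME convention and the ENCRYPTED_SERVICE cell are of course
hypothetical at this point.

# Sketch of option 3): alongside the normal A lookup, the exit checks
# whether the destination advertises an Encrypted Service via a CNAME
# (or some other special record).  Uses the third-party dnspython
# package; the .onion-in-CNAME convention here is hypothetical.
import dns.resolver  # pip install dnspython

def lookup_encrypted_service(hostname):
    """Return the advertised .onion address for hostname, or None."""
    try:
        answers = dns.resolver.query(hostname, "CNAME")
    except Exception:  # NXDOMAIN, no answer, timeout, ...
        return None
    for rr in answers:
        target = str(rr.target).rstrip(".")
        if target.endswith(".onion"):
            return target
    return None

onion = lookup_encrypted_service("example.com")
if onion is not None:
    # Instead of a RELAY_CONNECTED cell, the exit would send back an
    # ENCRYPTED_SERVICE cell carrying this address for the client to cache.
    print("destination also reachable as %s" % onion)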
I would love some feedback on this topic.
- Art.
Hi,
I've been working on a small tool whose purpose is to protect bridges
from the Chinese firewall. The tool runs independently of Tor and
analyzes/rewrites SYNs and SYN/ACKs which it gets with the help of
libnetfilter_queue. It is quick and easy to set up and can be run by
bridge operators.
Basically, the tool achieves two things:
- Evading the Chinese DPI engine by rewriting the TCP window size
during the TCP handshake. This leads to a fragmented cipher list
which does not seem to be recognized by the GFC.
- Blocking scanners with two dirty hacks.
I have not had a lot of time to test it yet, but I've found the
window-size rewriting to be particularly effective (if ugly). It worked
with Windows {XP, 7} and recent Linux boxes. The scanner-blocking
strategies are not that effective, since they cause many false
positives, i.e., legitimate users being locked out.
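For anyone who wants to play with the window-size idea without reading
the C, here is a rough Python sketch of the same trick using the
third-party NetfilterQueue and Scapy packages; brdgrd itself does this
in C against libnetfilter_queue, so treat this only as an illustration.

# Illustration of the window-rewriting idea (not brdgrd's actual code):
# shrink the TCP window announced in outgoing SYN/ACKs so that the
# client's first TLS record (carrying the cipher list) gets fragmented.
# Needs the third-party netfilterqueue and scapy packages, plus an
# iptables rule such as:
#   iptables -A OUTPUT -p tcp --tcp-flags SYN,ACK SYN,ACK -j NFQUEUE --queue-num 0
from netfilterqueue import NetfilterQueue
from scapy.all import IP, TCP

SMALL_WINDOW = 100  # bytes; small enough to split the ClientHello

def rewrite_synack(pkt):
    ip = IP(pkt.get_payload())
    if ip.haslayer(TCP) and (ip[TCP].flags & 0x12) == 0x12:  # SYN+ACK
        ip[TCP].window = SMALL_WINDOW
        del ip[IP].chksum    # force scapy to recompute the checksums
        del ip[TCP].chksum
        pkt.set_payload(bytes(ip))
    pkt.accept()

nfq = NetfilterQueue()
nfq.bind(0, rewrite_synack)
try:
    nfq.run()
finally:
    nfq.unbind()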
Before showing this to a broader audience, I need some people to look
at the code, though. The code, just 600 lines of C, is available at:
https://github.com/NullHypothesis/brdgrd/
Cheers,
Philipp
Here is a draft of a proposal for a pluggable transport using the
WebSocket protocol. In short, WebSocket is a socket-like feature
accessible to JavaScript in newer web browsers. Here are some links
about it:
https://tools.ietf.org/html/rfc6455
http://dev.w3.org/html5/websockets/
https://developer.mozilla.org/en/WebSockets
WebSocket is the transport used by flash proxies (which now use
JavaScript instead of Flash). This pluggable transport is a necessary
part of the flash proxy system as it stands, because something needs to
stand between a Tor client and a web browser proxy, and again between
the proxy and a Tor relay. This proposal mainly describes what is
already implemented in
https://gitweb.torproject.org/flashproxy.git
The program connector.py is the client transport plugin, and for the
server transport plugin I'm using a program called websockify.
(websockify isn't completely satisfactory though, and replacing it is
ticket #5575.) What's implemented works well enough that I have been
using IRC over Tor over a WebSocket transport for about a week.
I want to emphasize that this proposal is not the entirety of the flash
proxy architecture, but only a part of it. I'm posting it here because
1) I want your help in getting it right, and 2) it may be useful beyond
just flash proxies.
For anyone interested in reading the proposal, I'd like to call your
attention to a few points for comment. One is that there are different,
partially incompatible versions of the WebSocket protocol. I have made
the most recent version (RFC 6455) a MUST and any earlier versions a
MAY. However, it may be that in a little while browser support will be
such that there is no reason to support old versions. The other is the
base64 subprotocol used to send binary data over a text-only channel.
WebSocket has fully binary messages, but they are not supported in
Firefox 10. This may be another thing that is changing rapidly enough to
drop, as Firefox 11 and Chrome 16 do support binary messages.
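To illustrate the base64 subprotocol point: when only text messages are
available, the binary Tor traffic has to be wrapped roughly like the
sketch below (this shows only the payload encoding, not the actual
WebSocket framing or the exact scheme in the proposal).

# Sketch of the base64 subprotocol idea: binary Tor cells are encoded
# into text so they can ride in text-only WebSocket messages, and
# decoded again at the far end.  Payload encoding only; the WebSocket
# framing itself is not shown.
import base64

def to_text_frame(binary_chunk):
    """Encode raw bytes for a text-only WebSocket message."""
    return base64.b64encode(binary_chunk).decode("ascii")

def from_text_frame(text_message):
    """Recover the raw bytes on the receiving side."""
    return base64.b64decode(text_message)

cell = b"\x00\x01\x02 binary tor traffic \xff"
frame = to_text_frame(cell)
assert from_text_frame(frame) == cell
print("text frame: %s" % frame)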
David Fifield
Hi everyone,
We have an upcoming Sponsor F milestone on July 1. Here's the list of
deliverables:
https://trac.torproject.org/projects/tor/wiki/org/sponsors/SponsorF/Year2#P…
We need a ticket owner for all tickets in the July 1 milestone. We also
need a schedule for each of those tickets with the next substep ideally
being due in the next 4--6 weeks.
I suggest we have an IRC meeting on
Thu Apr 12, 16:00--18:00 UTC in #tor-dev.
Here's the time and date for people not living in UTC land:
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20120412T16
People who should attend are: George, Erinn, Nick, Steven, Roger, Aaron,
Jake, Linus, Sebastian, and anyone else who wants to attend.
If you have any input on the project tickets before next Thursday,
please feel free to comment on them. The more questions we solve before
the meeting, the better.
Thanks,
Karsten
Vecna [1] today published apmislay [2], an 8-year-old project to
implement anonymous communication through IP spoofing.
It's a not-so-conventional technique, with its own advantages and
weaknesses, but it may be worth considering within the Tor community
for some particular use cases.
It was a gift for my 2004 birthday (thanks!!!) :-)
-naif
[1] http://www.delirandom.net
[2] https://github.com/vecna/apmislay
Greetings,
A while ago the Tor Project rolled out an Obfsproxy Browser Bundle [1]
to help users behind firewalls that filter SSL or detect other
characteristics of a Tor connection access bridges.
In our recent work, SkypeMorph [2], we have tried to use Skype video
communications as our target protocol for protocol obfuscation.
SkypeMorph's functionality is similar to Obfsproxy, but the connection
between the bridge and the client looks like a Skype video call (the
details of how we do this are discussed in the technical report).
We also have an open-source proof-of-concept implementation of
SkypeMorph available at [3].
Notes:
1- At the moment our code relies on the SkypeKit SDK [4] (a paid Skype
SDK which you can get for around US$5) for Skype functionality (the
README file in the package explains how one can obtain SkypeKit).
However, it can easily be ported to the Skype public API [5], so users
would not have to pay for it.
2- SkypeMorph and pluggable transports: Although our code can
potentially be used as a pluggable transport, there is a minor
difficulty with the pluggable transport framework that needs to be
addressed before it can host our code. As mentioned above, our code
uses the Skype network for basic login, so it takes a little more time
than Tor expects from a typical transport (like Obfsproxy), and the
Tor client gives up building circuits after a while. We are aware of
Tor controller tricks to work around the problem, but they do not seem
to be the right way to do it; it would be awesome if the pluggable
transport were able to tell Tor that it's still working on setting up
the connection, and that Tor shouldn't give up on it until it says it's
ready. I am sure other transports could also benefit from this.
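For completeness, the controller workaround we mean looks roughly like
the sketch below (using the stem library and Tor's existing
CircuitBuildTimeout / LearnCircuitBuildTimeout options; the values are
illustrative). It works, but a way for the transport to tell Tor
"still setting up, don't give up yet" would be much cleaner.

# Rough sketch of the controller workaround (values are illustrative):
# raise Tor's circuit-build timeout while SkypeMorph is still logging
# into the Skype network.  Assumes the stem controller library and a
# control port on 9051.
from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    controller.set_options({
        "LearnCircuitBuildTimeout": "0",  # stop Tor from adapting the timeout
        "CircuitBuildTimeout": "120",     # give the Skype login time to finish
    })
    print("raised the circuit build timeout while the transport sets up")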
Hooman
[1]:https://blog.torproject.org/blog/obfsproxy-next-step-censorship-arms-race
[2]:http://cacr.uwaterloo.ca/techreports/2012/cacr2012-08.pdf
[3]:http://crysp.uwaterloo.ca/software/
[4]:http://developer.skype.com/public/skypekit
[5]:http://developer.skype.com/public-api-reference
Hi Tor Community,
My name is Tengey Junior Patrick, a 4th-year student at the University
of Ghana, in Ghana of course. I am studying Computer Science and
Mathematics, and I am a programmer, mainly in Java, who is very
interested in participating in GSoC 2012 with Tor.
I have read that Tor is participating in GSoC 2012. I am very enthused
about what Tor does and I can't wait to be a part of this campaign of
improving Internet privacy and security, especially because I have
been a victim of Internet surveillance where my password was tampered
with, so I am well motivated for this. I have been observing your
community interactions for some time now.
I have already subscribed to the tor-announce mailing list and the
tor-dev list. I have also been idling in #tor-dev on the OFTC IRC
network. I have gone through the proposed student projects [2] and some
documentation on "Running Tor" [1]. I have downloaded and installed the
stable version of the Vidalia bundle and I am acquainting myself with
its usage. I want to know if I am doing the right things, since I am
new to Tor.
Also, I want to know if the project on "Tor Controller Status Event
Interface for Vidalia" is still available to work on. I mainly
program in Java and have done some work on UI development. I am also
quite proficient in the English language and can play around with
Photoshop as well. But I don't really program in C++, though I have
learnt the basics in school, so does that disqualify me from working on
the project? If not, what else must I know prior to applying in
order to get acquainted with your practices and better understand your
organizational setup? I would be very glad for any help in this
regard. Thanks a lot.
[1]https://www.torproject.org/docs/documentation.html.en
[2]https://www.torproject.org/getinvolved/volunteer.html.en#Projects
Analysis of the Relative Severity of Tagging Attacks:
Hey hey, ho ho! AES-CTR mode has got to go!
A cypherpunk riot brought to you by:
The 23 Raccoons
Abstract
Gather round the dumpster, humans. It's time for your Raccoon
overlords to take you to school again.
Watch your step though: You don't want to catch any brain parasites[0].
Introduction
For those of you who do not remember me from last time, about 4 years
ago I demonstrated the effect that the Base Rate Fallacy has on timing
attacks[1]. While no one has disputed the math, the applicability of
my analysis to modern classifiers was questioned by George Danezis [2]
and others. However, a close look at figure 5(a) of [3] shows it to be
empirically correct[4].
Recently, Paul Syverson and I got into a disagreement over the
effectiveness of crypto-tagging attacks such as [5]. He asked me to
demonstrate that they were more powerful than active timing attacks
(which I've done in [6]), and to measure just how much more powerful
they were (which is shown in this work). At least I think that's what
he was asking. His paragraphs were very looooooong...
Anyway, out of the goodness of my little Raccoon heart, I asked my
brethren to help me complete this proof ahead of schedule. We're
worried about you guys. You be gettin sloppy with da attack analysis,
yo (brain parasites??). And when ya get sloppy, the Raccoons pick up
the scraps and multiply.
And, as you'll see below, you're not gonna like it when we multiply.
It means more work for you. (But you probably should have realized
that in the first place).
The Amplification Potential of Tagging
Crypto-tagging attacks like [5] provide for an amplification attack
that automatically boosts attack resource utilization by causing any
uncorrelated activity to immediately fail, so the attacker doesn't
have to worry about devoting resources to uncompromised traffic.
Those of you who are already familiar with [5], stay with me. The
authors of [5] apparently did not realize the amplification power of
their attack, either. Despite my teasing above, I can see why you
dismissed them initially.
The crypto-tagger achieves amplification by being destructive to a
circuit if the tagged cell is not untagged by them at the exit of the
network, and also by being destructive when a non-tagged cell is
"untagged" on a circuit coming from a non-tagging entry. It transforms
all non-colluding entrances and exits into a "half-duplex global"
adversary that works for the tagger to ensure that all traffic that he
carries goes only through his colluding nodes.
Imitating a Tagging Attack with Timing Attacks
The crux of the argument against fixing crypto-tagging attacks is that
they can be imitated by an active adversary using timing attacks.
To imitate a tagging attack, the attacker attempts to achieve circuit
killing amplification by using timing to try to determine which
circuits are not flowing to colluding nodes, and kill them.
The imitated tagging attack has two steps. First, the two colluding
endpoints correlate all candidate matches together and kill all other
circuits off. Then, they embed a more thorough active timing signature
into the remaining circuits to determine the sure matches.
We contend that this first step has very little timing information
available due to the need to close circuits before streams are opened
(which happens after just a couple cells). Certainly not enough to
establish 0-error across a large sample size. Even so, in the analysis
we'll be generous and concede a very low false positive rate could
still be possible. It turns out not to matter that much, as long as
it's non-zero.
So let's analyze each step of the imitating attack in turn.
Imitating Tagging: Circuit Killing Step
In the first pass of the imitating attack, the adversary performs an
initial correlation of new circuits, and then kills the ones that
don't correlate. So let's do the base rate analysis[1] for the
correlation, shall we?
The probability that an arbitrary pair of circuit endpoints seen
through the c/n colluding nodes belongs to the same circuit is equal
to (c/n)^2 times the probability of picking an arbitrary matching pair
of circuit endpoints out of the network's 's' streams (1/s^2).
Pk(M) = (c/n)^2 * (1/s)^2
From my previous work in [1], we have the effect of the base rate on
this attack:
Pk(M|C) = Pk(C|M)*Pk(M)/(Pk(M)*Pk(C|M) + Pk(~M)*Pk(C|~M))
For every actual match, the adversary can expect to have 1/Pk(M|C)
additional matches predicted by the correlator.
If you churn through some more analysis, you can see that the
probability Pk(~M|~C) of correctly killing non-matching circuits is
pretty high (but is still a function of c/n). In other words, the
adversary is pretty sure that the circuits he does kill are
irrelevant. Since everyone around here likes to assume the correlating
adversary is all-powerful, we doubt we need to show their strength in
this avenue. Let's just assume Pk(~M|~C) = 1, and no true matches are
killed early.
Now for the numbers. Being a Raccoon, I am limited by the precision of
my trusty rusty squirrel-skull abacus[7], so I'll give the imitating
adversary several benefits of the doubt here to keep the math more
simple. You can re-calculate at home on a high precision calculator
without these assumptions if you like.
First, let's just assume for the ease of analysis that the imitating
adversary gets to behave globally in the first step and set c=n for it
(relax Paul, this assumption is in your favor). After all, maybe the
NSA has some tricks up their sleeve with respect to global timing
analysis that we don't know about. If we don't give the imitating
adversary this bonus, the base rate just gets too small to manage and
crypto-tagging wins by a landslide because of its free "half-duplex
global" property. It would take all of the excitement right out of our
proof!
Pk(M) = (n/n)^2 * (1/s)^2 = 1/s^2
To toss the imitating adversary another bone (since they keep falling
off of my abacus anyway), and because a 0.0006 false positive rate "is
just a non-issue"[8], we'll give those chumps an extra 0. They deserve
it, they need it, and we're feeling generous. Hey, maybe they even can
successfully encode some timing information between the first two
cells on a circuit.
Pk(C|~M) = 0.00006
Pk(C|M) = 0.99994 = 99.994%
As if that weren't enough, we'll *still* use only s=5000 concurrent
streams, even though over the past 4 years of network growth, that is
now an absurdly low number.
Pk(M) = (1/5000)^2 = 4*10^-8
Plugging everything in:
Pk(M|C) = 0.99994*4*10^-8/(0.99994*4*10^-8 + (1-4*10^-8)*0.00006)
Pk(M|C) = 0.000666
1/Pk(M|C) => 1501 extra circuits survive for every true match.
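If you don't trust the squirrel-skull abacus, the arithmetic is easy to
re-check; here is the same calculation in Python, under the same
generous assumptions as above:

# Re-doing the abacus arithmetic for the circuit-killing step, with the
# same generous assumptions (c/n = 1, s = 5000, extra zero on the FPR).
s = 5000.0                       # concurrent streams (deliberately low)
Pk_M = (1.0 / s) ** 2            # base rate = 4e-08
Pk_C_given_M = 0.99994           # true positive rate
Pk_C_given_notM = 0.00006        # false positive rate (with the gifted 0)

Pk_M_given_C = (Pk_C_given_M * Pk_M) / (
    Pk_C_given_M * Pk_M + (1 - Pk_M) * Pk_C_given_notM)

print("Pk(M|C)   = %.6f" % Pk_M_given_C)                      # ~0.000666
print("1/Pk(M|C) = %.0f extra circuits per true match" % (1 / Pk_M_given_C))  # ~1501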
The imitating adversary sure seems to be carrying a lot of extra
traffic at this point (roughly 1501 times as much as he wants), even
though we made three seriously large (to the point of being erroneous)
assumptions in his favor. Stay tuned for the exciting conclusion to
see what he'll do with it.
Imitating Tagging: Active Timing Attack Step (at 100% accuracy)
After filtering, the imitating adversary then moves on to use an
active timing attack to determine the true matches. Let's walk through
the base rate analysis to see what they will look like.
The probability of picking an arbitrary, random endpoint match is
proportional to the number of remaining endpoints, which should trend
towards the fraction of the colluding capacity times the number of
total endpoints:
Pi(M) ~= O((c/n) * (1/s^2))
Technically, there is a correction we need to do for the increased
prior probability of matches being present due to the filtering step
above, but we're going to ignore that for now, because we'll just give
the adversary 100% accuracy for this stage. We do not believe a
0-error active timing attack would survive analysis (see the Future
Work section), but Paul was quite insistent, and it also simplifies
analysis.
So here you go, Paul:
Pi(C|M) = 1
Pi(C|~M) = 0
Pi(M|C) = 1*Pi(M)/(Pi(M) + 0*(1-Pi(M)))
Pi(M|C) = 1
With this level of accuracy, Pi(M) is irrelevant. The base rate loses
this one (but only because the error rate is contrived).
Now, how many of the network's total circuits does the adversary
actually compromise? Well, the adversary is carrying c/n of the
network traffic, but only Pk(M|C) of those circuits are actually valid
candidates for matching.
Of those, Pi(M|C) are discovered by the active timing attack (all of them).
Pi(compromise) = c/n * Pk(M|C) * Pi(M|C)
Pi(compromise) = c/n * 0.000666
Ok, not bad. The imitating adversary seems to beat the expected
O((c/n)^2) for end to end 0-error attacks for some values of c. So it
might be a good idea. Sometimes.
Let's check in with our crypto-tagger and see how he's doing.
Full Analysis of the Crypto-Tagging Attack
The most direct and intuitive route to calculate the base rate Pc(M)
for the crypto-tagger is through the observation that the "half-duplex
global" adversary is killing all traffic such that the all of the 's'
streams that flow through the adversary's nodes are fully compromised.
Pc(M) = (1 / ((c/n)*s))^2
Pc(M) = (n/c)^2 * (1/s)^2
Ugly looking base rate, but it doesn't matter, because the
crypto-tagger can in fact encode arbitrary bit strings in his tags
without even resorting to timing. Bit string encoding was not actually
discussed in [5], but our crack research team of 23 Raccoons doesn't
see why it isn't possible.
Therefore, the crypto-tagger's Pc(M|C) ends up 1.0. But unlike the
imitating tagger, the crypto-tagger doesn't need any gifts from
Raccoons to achieve his success rate.
Pc(C|M) = 1
Pc(C|~M) = 0
Pc(M|C) = 1*Pc(M)/(1*Pc(M) + 0*(1-Pc(M)))
Pc(M|C) = 1
To calculate the probability of compromise of an arbitrary circuit
chosen from the entire network, we need to get a measure on the number
of circuits that flow through the adversary's nodes.
The most direct and intuitive way to calculate this probability is to
realize that the "half-duplex global" adversary created by the
crypto-tag ensures that all of the c/n network capacity deployed by
the attacker carries only fully compromised circuits. Therefore, the
attacker can expect to compromise c/n of the circuits on the network.
The probability of compromise network-wide is then:
Pc(compromise) = Pc(M|C) * c/n
Pc(compromise) = c/n
In other words, the attack expects to compromise (c/n)*s of the
network's total concurrent streams. So much for O((c/n)^2).
If even just one of the major exit relays became compromised or
coerced to implement a crypto-tagging attack (or hey, just did it for
the lullz!), the consequences would be devastating, and invisible to
users.
Crypto-Tagger vs Imitating Tagger
Let's compare the two probabilities of compromise:
Pi(compromise) = Pc(compromise)*Pk(M|C)
Pc(compromise) = Pi(compromise)/Pk(M|C)
Pc(compromise) = Pi(compromise)*1501
So even with a 100% accurate active timing attack and several very
liberal assumptions in favor of the imitating adversary, the
crypto-tagger compromises 1501 *times* as many circuits with the same
attack capacity. That's some nice amplification.
Moreover, the crypto-tagger has a compromise rate of c/n, which
obliterates the O((c/n)^2) compromise rate that c/n-carrying
adversaries are supposed to be capable of.
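To put a concrete number on it, pick any colluding fraction you like,
say c/n = 0.05 (just an example value), and compare the two compromise
rates:

# Compare the two attackers' network-wide compromise rates for an
# example colluding fraction c/n = 0.05 (any value shows the same gap).
c_over_n = 0.05
Pk_M_given_C = 0.000666          # from the circuit-killing step above

Pi_compromise = c_over_n * Pk_M_given_C   # imitating tagger
Pc_compromise = c_over_n                  # crypto-tagger

print("imitating tagger: %.6f of all circuits" % Pi_compromise)   # ~0.000033
print("crypto-tagger:    %.6f of all circuits" % Pc_compromise)   # 0.050000
print("amplification:    %.0fx" % (Pc_compromise / Pi_compromise))  # roughly 1500x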
Sounds like it's time to swap out AES-CTR in favor of a
self-authenticating cipher[9], amirite?? OCB mode, anyone?
Future work
We can further elaborate the above analysis to take into account more realistic
error rates for active timing attacks. Such an exercise might be
instructive, but we believe it is not necessary to properly evaluate
imitating tagging versus crypto-tagging. It will only make the
imitating tagger look worse, and everybody should realize by now he's
just a poser anyway.
-----------------------------
0. https://en.wikipedia.org/wiki/Baylisascaris_procyonis
1. http://archives.seul.org/or/dev/Sep-2008/msg00016.html
2. https://conspicuouschatter.wordpress.com/2008/09/30/the-base-rate-fallacy-a…
3. http://www.cl.cam.ac.uk/~sjm217/papers/pet07ixanalysis.pdf
4. https://lists.torproject.org/pipermail/tor-talk/2012-March/023592.html
5. http://www.cs.uml.edu/~xinwenfu/paper/ICC08_Fu.pdf
6. https://www.eff.org/pages/tor-and-https
7. http://www.youtube.com/watch?v=ERwqbdAIY04
8. https://blog.torproject.org/blog/one-cell-enough
9. https://en.wikipedia.org/wiki/Authenticated_encryption
10. Look, I used more citations this time!