[redirecting from the original tor-project@ post to here]
On Thu, Apr 09, 2020 at 11:34:29AM -0700, Philipp Winter wrote:
> == Reading group ==
>
> * We will discuss "SymTCP: Eluding Stateful Deep Packet Inspection with Automated Discrepancy Discovery" on April 16
> * https://censorbib.nymity.ch/#Wang2020a
> * Questions to ask and goals to have:
> * What aspects of the paper are questionable?
> * Are there immediate actions we can take based on this work?
> * Are there long-term actions we can take based on this work?
> * Is there future work that we want to call out, in hopes that others will pick it up?
Thanks Philipp. For those of you who like more structure in your
reading, here are three Tor-oriented "homework" questions to think about
while you're reading this paper:
(1) Do any of the tricks they come up with work, or help us find
tricks that work, entirely from user space, like Tor or Tor Browser? If
we can get past the initial "no, they're all packet-based tricks, so
we'd need root at the least" to finding some that can be done with
socket functions or the like, that would be really neat.
(2) Are there promising techniques that only require root-level changes
on the client side? In particular, could Tails ship with some kernel or
iptables changes that make DPI engines not trigger on bridge handshakes?
(Doing it to protect Tor handshakes for relays or popular bridges seems
like a much harder problem, because if the censor blackholes the IP
address when they see a verboten handshake, then *everybody* needs to
talk to it in a safe way, or somebody else will get it blocked and now
you can't use it either.)
(3) How about only server-side? I'm imagining asking Linux bridge
operators to run a few iptables rules to make their bridges more robust.
(Like the old "drop the first 3 syn packets in a flow, because the GFW
network stack is optimized to only send 3, but real OSes try more than
3 times" trick.)
And for extra credit: can anything be done on Android or iOS? Or maybe
better: what would it take to apply these ideas to Android Tor Browser /
Orbot users, or to iOS Onion Browser users?
Thanks!
--Roger
I made a post about the DNS tunnel I have been working on. It uses a DNS
over HTTPS or DNS over TLS resolver for covertness, and the interior of
the tunnel follows the Turbo Tunnel design so the peers can be more free
about when they send to each other.
https://github.com/net4people/bbs/issues/30
It doesn't exist as a proper pluggable transport, but it's pretty easy
to hack together a way to access a bridge through the tunnel. I made the
linked post using Tor Browser through the DNS tunnel. It's just two
steps.
First, get the tunnel client software and run it with the proper
parameters.
git clone https://www.bamsoftware.com/git/dnstt.git
cd dnstt/dnstt-client
go build
./dnstt-client -doh https://dns.google/dns-query -pubkey a8090ab2d7b918e69ed4b2340fcd9c2af33c08e3620af98fb9c6a460fb63f76d tor.rinsed-tinsel.site 127.0.0.1:7000
You can replace "https://dns.google/dns-query" with another server from
https://github.com/curl/curl/wiki/DNS-over-HTTPS#publicly-available-servers
Second, in Tor Browser, go to about:preferences#tor, select "Provide a
bridge", and enter
127.0.0.1:7000 4D6C0DF6DEC9398A4DEF07084F3CD395A96DD2AD
tor will connect to 127.0.0.1:7000 as if it were a remote bridge, but
that port actually leads through the tunnel to the ORPort of my bridge
giygas.
What I had to do to set up the server side: first I went into the DNS
configuration for my domain rinsed-tinsel.site and added the records
tns.rinsed-tinsel.site.  A     192.81.135.242
tns.rinsed-tinsel.site.  AAAA  2600:3c01::f03c:91ff:fe73:b602
tor.rinsed-tinsel.site.  NS    tns.rinsed-tinsel.site.
The A and AAAA records are the IP addresses of my bridge. Then I ran the
following server commands (plus forwarding UDP port 53 to the tunnel
server's port 5300). Notice
that the tunnel server is configured to terminate the tunnel at the
ORPort of the locally running tor bridge.
./dnstt-server -gen-key -privkey-file dnstt-tor.key -pubkey-file dnstt-tor.pub
./dnstt-server -udp :5300 -privkey-file dnstt-tor.key tor.rinsed-tinsel.site 192.81.135.242:9001
I won't commit to running the server part of the tunnel forever, but
I'll leave it set up the way it is for a while in case you want to try
it.
Here's Matt's email on how moat works.
----- Forwarded message from Matthew Finkel <sysrqb(a)torproject.org> -----
[...]
"meek-server" is (basically) the mirror-image of "meek", except it's on
the server-side instead of the client-side - it decapsulates the request
which was encapsulated by the meek client. Usually, this is the
"pluggable transport bridge" running on some cloud VM (Google App
Engine, MS Azure, AWS, etc.). It receives the normal incoming
"transformed" traffic from the client, decodes it, and passes it onto
the tor bridge. With moat, this configuration is tweaked a bit. Instead
of running the meek-server and a tor bridge on a cloud vm, the meek
server is running on the bridgedb server (and the incoming traffic is
passed onto bridgedb rather than being passed onto the tor bridge). This
means the cloud service is only being used for its CDN capability (and
purely domain fronting), acting as a pass-through: from the client to
the CDN to bridges.torproject.org, verses how meek is usually run where
the tor bridge is running in the cloud, as well. With meek, there is an
actual reflector VM where meek-server and the tor bridge usually run.
With moat, the "reflector" is in name only, it simply associates the
incoming connection with the CDN client configuration, so the CDN knows
which client configuration it should look at for handling the traffic.
Diagram of standard meek setup (which you probably already saw):
https://trac.torproject.org/projects/tor/wiki/doc/meek#Overview
The description of how this works is in this ticket:
https://trac.torproject.org/projects/tor/ticket/16650
The general flow is:
- Tor Browser's Tor Launcher creates HTTP request for https://bridges.torproject.org/moat
- Tor Launcher starts a meek-client as a local proxy and passes in this request
- The meek-client establishes a TLS connection with the CDN (client -> https://ajax.aspnetcdn.com)
- The meek-client then sends HTTP requests within this TLS tunnel
where the Host header is onion.azureedge.net and the body
of the encapsulated request is the TCP stream for the HTTP request
from Tor Launcher
- This allows for creating the end-to-end TLS connection between
Tor Launcher and BridgeDB
- All traffic sent to the CDN is then proxied to https://bridges.torproject.org/meek
(client -> CDN -> https://bridges.torproject.org/meek)
- On the bridgedb server, there is an Apache webserver running that
looks for incoming requests for /meek and redirects them to the
meek-server running locally
(client ->
CDN ->
https://bridges.torproject.org/meek ->
meek-server)
- The meek-server takes the incoming connections and decapsulates the
enclosed request and passes it back to the Apache webserver (where
the enclosed request is the TLS connection for https://bridges.torproject.org/moat)
(client ->
CDN ->
https://bridges.torproject.org/meek ->
meek-server (local) ->
https://bridges.torproject.org/moat)
- The Apache webserver then passes this request to BridgeDB for
handling. Responses go in the reverse order.
(client ->
CDN ->
https://bridges.torproject.org/meek ->
meek-server (local) ->
https://bridges.torproject.org/moat ->
bridgedb)
The result of this is that the client establishes the usual TLS connection
with the CDN (therefore using it as the domain-front), then all traffic
is passed from the CDN onto bridges.torproject.org. Within that
pass-through connection, moat establishes another end-to-end TLS
connection with bridges.torproject.org.
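To make the fronting step concrete, here is a minimal sketch in Go (an
illustration, not the actual meek-client code) of a single domain-fronted
request. The URL, and therefore the TLS SNI, names the front domain, while
the Host header names the reflector hostname whose origin is
bridges.torproject.org; the real meek-client carries encapsulated session
data in the request bodies rather than a fixed string.

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

func main() {
	// The URL controls where we connect and what TLS SNI we send:
	// the front domain.
	req, err := http.NewRequest("POST", "https://ajax.aspnetcdn.com/",
		strings.NewReader("encapsulated request data"))
	if err != nil {
		panic(err)
	}
	// The Host header, hidden inside TLS, selects the CDN configuration
	// whose origin is bridges.torproject.org.
	req.Host = "onion.azureedge.net"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body))
}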
The ticket (29096) suggests changing how meek-server is currently run.
Right now, it's simply a (hacky) shell script. My understanding is that
using ptadapter would simply replace the shell script with a maintained
Python program, providing essentially the same thing.
https://gitweb.torproject.org/project/bridges/bridgedb-admin.git/tree/bin/r…
The meek-server configuration is described here (if you're configuring
it as a tor bridge).
https://trac.torproject.org/projects/tor/wiki/doc/meek#MicrosoftAzure
[...]
----- End forwarded message -----
https://gitweb.torproject.org/user/dcf/snowflake.git/log/?h=turbotunnel&id=…
These are the elements of a Turbo Tunnel implementation for Snowflake.
Turbo Tunnel is a name for overlaying an abstract, virtual session on
top of concrete, physical network connections, such that the virtual
session is not tied to any particular network connection. In Snowflake,
it solves the problem of migrating a session across multiple WebRTC
connections as temporary proxies come and go. This post is a walkthrough
of the code changes and my design decisions.
== How to try it ==
Download the branch and build it:
git remote add dcf https://git.torproject.org/user/dcf/snowflake.git
git fetch dcf
git checkout -b turbotunnel --track dcf/turbotunnel
for d in client server broker proxy-go; do (cd $d && go build); done
Run the broker (not changed in this branch):
broker/broker --disable-tls --addr 127.0.0.1:8000
Run a proxy (not changed in this branch):
proxy-go/proxy-go --broker http://127.0.0.1:8000/ --relay ws://127.0.0.1:8080/
Run the server:
tor -f torrc.server
# contents of torrc.server:
DataDirectory datadir-server
SocksPort 0
ORPort 9001
ExtORPort auto
BridgeRelay 1
AssumeReachable 1
PublishServerDescriptor 0
ServerTransportListenAddr snowflake 0.0.0.0:8080
ServerTransportPlugin snowflake exec server/server --disable-tls --log snowflake-server.log
Run the client:
tor -f torrc.client
# contents of torrc.client:
DataDirectory datadir-client
UseBridges 1
SocksPort 9250
ClientTransportPlugin snowflake exec client/client --url http://127.0.0.1:8000/ --ice stun:stun.l.google.com:19302 --log snowflake-client.log
Bridge snowflake 0.0.3.0:1
Start downloading a big file through the tor SocksPort. You will be able
to see activity in snowflake-client.log and in the output of proxy-go.
curl -x socks5://127.0.0.1:9250/ --location --speed-time 60 https://cdimage.debian.org/mirror/cdimage/archive/10.1.0/amd64/iso-cd/debia… > /dev/null
Now kill proxy-go and restart it. Wait 30 seconds for snowflake-client
to notice the proxy has disappeared. Then snowflake-client.log will say
redialing on same connection
and the download will resume. It's not curl restarting the download on a
new connection—from the perspective of curl (and tor) it's all one long
proxy connection, with a 30-second lull in the middle. Only
snowflake-client knows that there were two WebRTC connections involved.
== Introduction to code changes ==
Start by looking at the server changes:
https://gitweb.torproject.org/user/dcf/snowflake.git/diff/server/server.go?…
The first thing to notice is a kind of "inversion" of control flow.
Formerly, the ServeHTTP function accepted WebSocket connections and
connected each one with the ORPort. There was no virtual session: each
WebSocket connection corresponded to exactly one client session. Now,
the main function, separately from starting the web server, starts a
virtual listener (kcp.ServeConn) that calls into a chain of
acceptSessions→acceptStreams→handleStream functions that ultimately
connects a virtual stream with the ORPort. But this virtual listener
doesn't actually open a network port, so what drives it? That's now the
sole responsibility of the ServeHTTP function. It still accepts
WebSocket connections, but it doesn't connect them directly to the
ORPort—instead, it pulls out discrete packets (encoded into the stream
using length prefixes) and feeds those packets to the virtual listener.
The glue that links the virtual listener and the ServeHTTP function is
QueuePacketConn, an abstract interface that allows the virtual listener
to send and receive packets without knowing exactly how those I/O
operations are implemented. (In this case, they're implemented by
encoding packets into WebSocket streams.)
The new control flow boils down to a simple, traditional listen/accept
loop, except that the listener doesn't interact with the network
directly, but only through the QueuePacketConn interface. The WebSocket
part of the program now only acts as a network interface that performs
I/O functions on behalf of the QueuePacketConn. In effect, we've moved
everything up one layer of abstraction: where formerly we had an HTTP
server using the operating system as a network interface, we now have a
virtual listener using the HTTP server as a network interface (which
in turn ultimately uses the operating system as the *real* network
interface).
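As a rough sketch of that accept chain, assuming the kcp-go and smux APIs
(the function and variable names here are simplified stand-ins for the ones
in the branch):

package sketch

import (
	"net"

	kcp "github.com/xtaci/kcp-go"
	"github.com/xtaci/smux"
)

// acceptLoop drives the virtual listener. pconn is the QueuePacketConn: a
// net.PacketConn that never touches the network; ServeHTTP feeds it packets
// decoded from WebSocket streams and writes out whatever it queues.
func acceptLoop(pconn net.PacketConn, handleStream func(*smux.Stream)) error {
	ln, err := kcp.ServeConn(nil, 0, 0, pconn) // virtual listener, no network port
	if err != nil {
		return err
	}
	for {
		conn, err := ln.AcceptKCP() // a virtual session (acceptSessions)
		if err != nil {
			return err
		}
		go func() {
			sess, err := smux.Server(conn, smux.DefaultConfig())
			if err != nil {
				return
			}
			for {
				stream, err := sess.AcceptStream() // a virtual stream (acceptStreams)
				if err != nil {
					return
				}
				go handleStream(stream) // copies between the stream and the ORPort
			}
		}()
	}
}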
Now look at the client changes:
https://gitweb.torproject.org/user/dcf/snowflake.git/commit/?h=turbotunnel&…
The Handler function formerly grabbed exactly one snowflake proxy
(snowflakes.Pop()) and used its WebRTC connection until it died, at
which point it would close the SOCKS connection and terminate the whole
Tor session. Now, the function creates a RedialPacketConn, an abstract
interface that grabs a snowflake proxy, uses it for as long as it lasts,
then grabs another. Each of the temporary snowflake proxies is wrapped
in an EncapsulationPacketConn to convert it from a stream-oriented
interface to a packet-oriented interface. EncapsulationPacketConn uses
the same length-prefixed protocol that the server expects. We then
create a virtual client connection (kcp.NewConn2), configured to use the
RedialPacketConn as its network interface, and open a new virtual
stream. (This sequence of calls kcp.NewConn2→sess.OpenStream corresponds
to acceptSessions→acceptStreams on the server.) We then connect
(copyLoop) the SOCKS connection and the virtual stream. The virtual
stream never touches the network directly—it interacts indirectly
through RedialPacketConn and EncapsulationPacketConn, which make use of
whatever snowflake proxy WebRTC connection happens to exist at the time.
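In the same sketch style, with the smux layer (the source of
sess.OpenStream) written out explicitly and names assumed:

package sketch

import (
	"io"
	"net"

	kcp "github.com/xtaci/kcp-go"
	"github.com/xtaci/smux"
)

// dialAndCopy sketches the client side. pconn is the RedialPacketConn (with
// EncapsulationPacketConn inside it); addr is a placeholder peer address,
// because pconn does its own routing to whatever proxy currently exists.
func dialAndCopy(pconn net.PacketConn, addr net.Addr, socks io.ReadWriteCloser) error {
	conn, err := kcp.NewConn2(addr, nil, 0, 0, pconn) // virtual client connection
	if err != nil {
		return err
	}
	sess, err := smux.Client(conn, smux.DefaultConfig())
	if err != nil {
		return err
	}
	stream, err := sess.OpenStream() // open a new virtual stream
	if err != nil {
		return err
	}
	defer stream.Close()
	// copyLoop: shuttle bytes between the SOCKS connection and the stream.
	go io.Copy(stream, socks)
	_, err = io.Copy(socks, stream)
	return err
}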
You'll notice that before anything else, the client sends a 64-bit
ClientID. This is a random number that identifies a particular client
session, made necessary because the virtual session is not tied to an IP
4-tuple or any other network identifier. The ClientID remains the same
across all redials in one call to the Handler function. The server
parses the ClientID out of the beginning of a WebSocket stream. The
ClientID is how the server knows if it should open a new ORPort
connection or append to an existing one, and which temporary WebSocket
connections should receive packets that are addressed to a particular
client.
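That prologue is tiny; in sketch form (assumed names, details differ in the
real code):

package sketch

import (
	"crypto/rand"
	"io"
)

// newClientID draws the 8 random bytes that identify a client session.
func newClientID() (id [8]byte, err error) {
	_, err = rand.Read(id[:])
	return id, err
}

// writeClientID sends the ClientID at the start of a WebSocket stream,
// before any length-prefixed packets. The same id is reused on every
// redial within one session.
func writeClientID(w io.Writer, id [8]byte) error {
	_, err := w.Write(id[:])
	return err
}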
There's a lot of new support code in the common/encapsulation and
common/turbotunnel directories, mostly reused from my previous work in
integrating Turbo Tunnel into pluggable transports.
https://gitweb.torproject.org/user/dcf/snowflake.git/tree/common/encapsulat…
The encapsulation package provides a way of encoding a sequence of
packets into a stream. It's essentially just prefixing each packet with
its length, but it takes care to permit traffic shaping and padding to
the byte level. (The Snowflake turbotunnel branch doesn't take advantage
of the traffic-shaping and padding features.)
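A stripped-down illustration of the idea, assuming a fixed 16-bit
big-endian length prefix (the real package uses a variable-length encoding
and reserves room for the padding and shaping features):

package sketch

import (
	"encoding/binary"
	"io"
)

// writePacket encodes one packet into the stream: length, then payload.
func writePacket(w io.Writer, p []byte) error {
	var l [2]byte
	binary.BigEndian.PutUint16(l[:], uint16(len(p)))
	if _, err := w.Write(l[:]); err != nil {
		return err
	}
	_, err := w.Write(p)
	return err
}

// readPacket decodes the next packet from the stream.
func readPacket(r io.Reader) ([]byte, error) {
	var l [2]byte
	if _, err := io.ReadFull(r, l[:]); err != nil {
		return nil, err
	}
	p := make([]byte, binary.BigEndian.Uint16(l[:]))
	_, err := io.ReadFull(r, p)
	return p, err
}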
https://gitweb.torproject.org/user/dcf/snowflake.git/tree/common/turbotunne…
https://gitweb.torproject.org/user/dcf/snowflake.git/tree/common/turbotunne…
QueuePacketConn and ClientMap are imported pretty much unchanged from
the meek implementation (https://github.com/net4people/bbs/issues/21).
Together these data structures manage queues of packets and allow you to
send and receive them using custom code. In meek it was done over raw
HTTP bodies; here it's done over WebSocket. These two interfaces are
candidates for an eventual reusable Turbo Tunnel library.
https://gitweb.torproject.org/user/dcf/snowflake.git/tree/common/turbotunne…
RedialPacketConn is adapted from clientPacketConn in the obfs4proxy
implementation (https://github.com/net4people/bbs/issues/14#issuecomment-544747519).
It's the part that uses an underlying connection for as long as it
exists, then switches to a new one. Since the obfs4proxy implementation,
I've decided that it's better to have this type use the packet-oriented
net.PacketConn as the underlying type, not the stream-oriented net.Conn.
That way, RedialPacketConn doesn't have to know details of how packet
encapsulation happens, whether by EncapsulationPacketConn or some other
way.
== Backward compatibility ==
The branch as of commit 07495371d67f914d2c828bbd3d7facc455996bd2 is not
backward compatible with the mainline Snowflake code. That's because the
server expects to find a ClientID and length-prefixed packets, and
currently deployed clients don't work that way. However, I think it will
be possible to make the server backward compatible. My plan is to
reserve a distinguished static token (64-bit value) and have the client
send that at the beginning of the stream, before its ClientID, to
indicate that it uses Turbo Tunnel features. The token will be selected
to be distinguishable from any protocol that non–Turbo Tunnel clients
might use (i.e., Tor TLS). Then, the server's ServeHTTP function can
choose one of two implementations, depending on whether it sees the
magic token or not.
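A sketch of how that dispatch might look (the token value and handler names
are hypothetical):

package sketch

import (
	"bytes"
	"io"
	"net"
)

// turboTunnelToken is a hypothetical magic value; the real one would be
// chosen to be distinguishable from the first bytes of Tor TLS.
var turboTunnelToken = [8]byte{0xf5, 0x6a, 0x91, 0x0e, 0x23, 0xc8, 0x57, 0xbd}

func dispatch(conn net.Conn) {
	var buf [8]byte
	if _, err := io.ReadFull(conn, buf[:]); err != nil {
		conn.Close()
		return
	}
	if bytes.Equal(buf[:], turboTunnelToken[:]) {
		handleTurboTunnel(conn) // next come the ClientID and packets
	} else {
		// Replay the 8 bytes already consumed for the legacy handler.
		handleLegacy(io.MultiReader(bytes.NewReader(buf[:]), conn), conn)
	}
}

func handleTurboTunnel(conn net.Conn)         { /* ClientID + length-prefixed packets */ }
func handleLegacy(r io.Reader, conn net.Conn) { /* plain pass-through to the ORPort */ }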
If I get backward compatibility working, then we can deploy a dual-mode
bridge that is able to serve either type of client. Then I can try
making a Tor Browser build, to make the Turbo Tunnel code more
accessible for user testing.
One nice thing about all this is that it doesn't require any changes to
proxies. They remain simple dumb pipes, so we don't have to coordinate a
mass proxy upgrade.
https://gitweb.torproject.org/user/dcf/snowflake.git/tree/server/server.go?…
The branch currently lacks client geoip lookup (ExtORPort USERADDR),
because of the difficulty I have talked about before of providing an IP
address for a virtual session that is not inherently tied to any single
network connection or address. I have a plan for solving it, though; it
requires a slight breaking of abstractions. In the server, after reading
the ClientID, we can peek at the first 4 bytes of the first packet.
These 4 bytes are the KCP conversation ID (https://github.com/xtaci/kcp-go/blob/v5.5.5/kcp.go#L120),
a random number chosen by the client, serving roughly the same purpose
in KCP as our ClientID. We store a temporary mapping from the
conversation ID to the IP address of the client making the WebSocket
connection. kcp-go provides a GetConv function that we can call in
handleStream, just as we're about to connect to the ORPort, to look up
the client's IP address in the mapping. The possibility of doing this is
one reason I decided to go with KCP for this implementation rather than
QUIC as I did in the meek implementation: the quic-go package doesn't
expose an accessor for the QUIC connection ID.
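In sketch form (assumed names; this describes the plan, not code that
exists in the branch), relying on kcp-go encoding the conversation ID
little-endian at the start of each packet:

package sketch

import (
	"encoding/binary"
	"net"
	"sync"
)

var convToAddr sync.Map // KCP conversation ID (uint32) -> client net.Addr

// recordConv is called after reading the ClientID: peek at the first packet
// of a WebSocket stream and map its conversation ID to the client address.
func recordConv(firstPacket []byte, addr net.Addr) {
	if len(firstPacket) >= 4 {
		conv := binary.LittleEndian.Uint32(firstPacket[:4])
		convToAddr.Store(conv, addr)
	}
}

// Later, in handleStream, just before connecting to the ORPort:
//     if addr, ok := convToAddr.Load(sess.GetConv()); ok {
//         // report addr in the ExtORPort USERADDR command
//     }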
== Limitations ==
I'm still using the same old logic for detecting a dead proxy, 30
seconds without receiving any data. This is suboptimal for many reasons
(https://bugs.torproject.org/25429), one of which is that when your
proxy dies, you have to wait at least 30 seconds until the connection
becomes useful again. That's why I had to use "--speed-time 60" in the
curl command above; curl has a default idle timeout of 30 seconds, which
would cause it to give up just as a new proxy was becoming available.
I think we can ultimately do a lot better, and make better use of the
available proxy capacity. I'm thinking of "striping" packets across
multiple snowflake proxies simultaneously. This could be done in a
round-robin fashion or in a more sophisticated way (weighted by measured
per-proxy bandwidth, for example). That way, when a proxy dies, any
packets sent to it would be detected as lost (unacknowledged) by the KCP
layer, and retransmitted over a different proxy, much quicker than the
30-second timeout. The way to do this would be to replace
RedialPacketConn, which uses one connection at a time, with a
MultiplexingPacketConn, which manages a set of currently live
connections and uses all of them. I don't think it would require any
changes on the server.
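Such a MultiplexingPacketConn might look roughly like the following.
Everything here is hypothetical, sketched only as far as the round-robin
idea above (ReadFrom and the rest of the net.PacketConn interface are
omitted):

package sketch

import (
	"net"
	"sync"
)

// MultiplexingPacketConn stripes outgoing packets across a set of live
// proxy connections. If a proxy dies, KCP detects the unacknowledged
// packets as lost and retransmits them, which routes them over a
// different proxy.
type MultiplexingPacketConn struct {
	mu    sync.Mutex
	conns []net.PacketConn
	next  int
}

func (c *MultiplexingPacketConn) WriteTo(p []byte, addr net.Addr) (int, error) {
	c.mu.Lock()
	conn := c.conns[c.next%len(c.conns)]
	c.next++
	c.mu.Unlock()
	return conn.WriteTo(p, addr)
}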
But the situation in the turbotunnel branch is better than the status
quo, even without multiplexing, for two reasons. First, the connection
actually *can* recover after 30 seconds. Second, the smux layer sends
keepalives, which means that you won't discard a proxy merely because
you're temporarily idle, but only when it really stops working.
== Notes ==
https://gitweb.torproject.org/user/dcf/snowflake.git/commit/?h=turbotunnel&…
I added go.mod and go.sum files to the repo. I did this because smux
(https://github.com/xtaci/smux) has a v2 protocol that is incompatible
with the v1 protocol, and I don't know how to enforce the use of v2 in
the build other than by activating Go modules and specifying a version
in go.mod.
On 2020-01-22, I got an email from Microsoft Azure about a data breach
of customer support records. The summary is that between 2019-12-05 and
2019-12-31, some Azure customer support records were exposed and
downloadable, though they don't think any were actually downloaded. I
got a notification because they identified some of the records as
belonging to the Azure account I administer.
https://msrc-blog.microsoft.com/2020/01/22/access-misconfiguration-for-cust…
https://www.zdnet.com/article/microsoft-discloses-security-breach-of-custom…
https://www.reddit.com/r/AZURE/comments/esdwld/microsoft_database_containin…
The involved account is the one that used to be used for meek-azure
domain fronting, and is currently used for Snowflake rendezvous domain
fronting (using the Azure CDN). The account is no longer used for
meek-azure.
The email said I could file a support request to find out exactly what
information was exposed, so that's what I did. The data set they sent
back to me consisted of two email threads, neither one directly related
to Tor's use of Azure. One was about trying to delete an unused VM disk
image, and the other was about trying to update a credit card.
I didn't find my name or the account email address in the files.
Apparently the files that were exposed had already been processed by an
automated redactor. I see markers like "{AlphanumericPII}" and
"{Namepii}" in the files, even over-redactions like
"font-family:"Times New {Namepii}"".
Since #28942, Snowflake no longer uses the go-webrtc library. (Actually
server-webrtc still uses it, but server-webrtc itself is hardly used.) I
opened a ticket (at GitHub, where go-webrtc is hosted) to discuss what
to do with it, especially for the benefit of third parties who may not
know that its maintainers aren't actively working on it any more.
https://github.com/keroserene/go-webrtc/issues/109