a friend and i are working on a Tor router design that doesn't compromise anonymity for convenience. [0][1][2][3][4]
we're soliciting feedback as part of a go / no-go decision on continuing this effort.
in particular, the design is intended to meet the scrutiny of Nick M., Roger, and Mike P., as the focus on supporting Tor Browser and Tor on each client indicates.
---
the design and prototype code are marked "copyright Tor Project Inc. by assignment", which means we are using a notary public to formally assign copyright ownership to the corporate entity "Tor Project, Inc.".
your comments will be taken into consideration; however, please defer patches / code contributions under other owners (you) until the assignment is complete.
---
last but not least, we're trying to eat our own dog food. all of our planning, development, and operations use hidden services, called Onion services in the document, and bootstrapping this way has been more difficult than expected. [5]
please provide feedback in reply on this thread or to me directly. [6] assuming the project continues, we will soon have Onion services up to support collaborative development.
best regards, and my thanks in advance for your scrutiny!
0. "Tor Enforcing Privacy Router" http://serqet345qt265xp.onion/
1. "Op-ed: Why the entire premise of Tor-enabled routers is ridiculous" http://arstechnica.com/security/2015/04/18/op-ed-why-the-entire-premise-of-t...
2. "[tor-relays] Anonbox Project - Mike Perry" https://lists.torproject.org/pipermail/tor-relays/2014-October/005541.html
3. "[tor-relays] Anonbox Project - Roger Dingledine" https://lists.torproject.org/pipermail/tor-relays/2014-October/005544.html
4. "[tor-talk] Cloak Tor Router (thread)" https://lists.torproject.org/pipermail/tor-talk/2014-November/035436.html
5. "Onion services" came in behind "Tor sites" because "sites" felt too web-browser focused. we're trying to avoid the legacy "hidden services" nomenclature.
6. i have a long history of extreme dislike for encrypted email, key servers, web of trust, and other moral hazards. however, if you encrypt to my key you can send private mail, if desired. note that some encrypted email clients will fail insecurely if the intended recipient doesn't match a keyring identifier! https://peertech.org/keys/0x65A847E7C2B9380C-pub.txt
Hi,
coderman wrote (03 May 2015 03:37:17 GMT) :
a friend and i are working on a Tor router design that doesn't compromise anonymity for convenience. [0][1][2][3][4]
Thanks!
please provide feedback in reply on this thread or to me directly.[6]
Just to clarify, the threat model explicitly doesn't include "Attacker is able to reconfigure Tor on a client system to use an arbitrary set of bridges", right?
Cheers, -- intrigeri
On 5/3/15, intrigeri intrigeri@boum.org wrote:
... Just to clarify, the threat model explicitly doesn't include "Attacker is able to reconfigure Tor on a client system to use an arbitrary set of bridges", right?
correct.
neither bridges nor pluggable transports are supported. i have added a FAQ entry for this. thanks!
in the future, it would be useful to have a way to securely distribute bridges or obfuscated proxies to trusted users on the local network. however, this is not a trivial task, and you'd want to avoid compromising all of your bridges at once if a failure occurs.
last but not least, if your attacker is coordinating the attack over Tor, obviously this cannot be thwarted at the local network level by a Tor router device. host security is critical, even with a Tor enforcing router as backup. that's a longer subject i need to think about more before writing anything useful.
best regards,
coderman:
On 5/3/15, intrigeri intrigeri@boum.org wrote:
... Just to clarify, the threat model explicitly doesn't include "Attacker is able to reconfigure Tor on a client system to use an arbitrary set of bridges", right?
correct.
neither bridges nor pluggable transports are supported. i have added a FAQ entry for this. thanks!
in the future, it would be useful to have a way to securely distribute bridges or obfuscated proxies to trusted users on the local network. however, this is not a trivial task, and you'd want to avoid compromising all of your bridges at once if a failure occurs.
last but not least, if your attacker is coordinating the attack over Tor, obviously this cannot be thwarted at the local network level by a Tor router device. host security is critical, even with a Tor enforcing router as backup. that's a longer subject i need to think about more before writing anything useful.
Well, there is a way to restrict the capabilities of such an adversary quite severely.
In my opinion, the most interesting use case for these devices is where Tor Launcher implements a peering mechanism whereby the user can click a button at some point in the initial connection wizard that says "My Router Knows My Tor Configuration."
When this button is clicked, a TLS connection can be made to the router IP, either through DNS discovery or by simply looking up the current gateway IP. Tor Launcher could then present the user with a randomart or randomwords representation of the TLS fingerprint of the router for confirmation that they are connecting to the expected device.
After this authentication step, Tor Launcher could obtain a set of bridge lines from the router via a simple JSON-RPC request, and then configure them as its bridges. The router enforces that these bridges (and only these bridges) can be connected to, or else a warning LED goes off.
While in the future it would be nice if the router could be configured with arbitrary PT bridge lines, it actually turns out that any public Guard node that has a DirPort can also be used as a bridge, so this configuration method need not be limited to censored users who perform such a configuration. In the uncensored case, the router could randomly select a Guard node for all users on the local network, which will also serve to reduce that local network's exposure to the Guard population, as well as reduce Guard-choice fingerprintability of the collection of devices on the local network.
The fact that the router is in control of the client configuration means that it serves as an additional security layer to protect against compromise of the browser. Since the browser and the rest of the end-user's computer have a much higher vulnerability surface than a router does, and are also much harder to audit for correctness than simply verifying that a few bridge lines are as expected, this configuration direction is far superior to the browser automatically configuring the router. It also simplifies the user experience for setting up a whole group of people on a secure Tor network at once.
As I've said in the past, if someone is willing to deploy the router side of this protocol in an easy-to-use router formfactor, we would implement the corresponding part in Tor Launcher. This would cover Tails, Tor Browser, Tor Birdy, and Tor Messenger users right off the bat.
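A rough sketch of what the client side of this peering step could look like. The JSON-RPC response shape, the word rendering, and the confirm-by-fingerprint trust model are all illustrative assumptions; no such protocol is specified yet.

```python
# Hypothetical Tor Launcher side of the "My Router Knows My Tor
# Configuration" step: TLS-connect to the gateway, show a fingerprint
# representation for user confirmation, then parse bridge lines out of
# a JSON reply. All endpoint/response details are assumptions.
import hashlib
import json
import socket
import ssl

def connect_and_fingerprint(host, port=443):
    """TLS-connect to the router (self-signed cert, so no CA check;
    trust is established by the user confirming the fingerprint) and
    return the socket plus the SHA-256 fingerprint of its cert."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # fingerprint check replaces CA trust
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    return sock, hashlib.sha256(der).hexdigest()

def to_words(fingerprint_hex, wordlist):
    """Render the first bytes of the fingerprint as 'randomwords'
    for the user-confirmation dialog."""
    raw = bytes.fromhex(fingerprint_hex)
    return " ".join(wordlist[b % len(wordlist)] for b in raw[:4])

def parse_bridge_lines(response_json):
    """Extract bridge lines from a hypothetical JSON-RPC reply such as
    {"result": {"bridges": ["203.0.113.5:443 <fingerprint>", ...]}}."""
    return list(json.loads(response_json)["result"]["bridges"])
```

The fetched lines would then be set as the client's bridges, with the router enforcing that only those addresses are reachable.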
On 5/4/15, Mike Perry mikeperry@torproject.org wrote:
... In my opinion, the most interesting use case for these devices is where Tor Launcher implements a peering mechanism whereby the user can click a button at some point in the initial connection wizard that says "My Router Knows My Tor Configuration."
When this button is clicked, a TLS connection [.. with good auth ..] ...
After this authentication step, Tor Launcher could obtain a set of bridge lines from the router via a simple JSON-RPC request, and then configure them as its bridges. The router enforces that these bridges (and only these bridges) can be connected to, or else a warning LED goes off.
great; this resolves the bridge and pluggable transport deficiency! and reduces the impact of Browser vulnerability, among other benefits.
... any public Guard node that has a DirPort can also be used as a bridge, so this configuration method need not be limited to censored users who perform such a configuration. In the uncensored case, the router could randomly select a Guard node for all users on the local network, which will also serve to reduce that local network's exposure to the Guard population, as well as reduce Guard-choice fingerprintability of the collection of devices on the local network.
great; this addresses my other concern with local users relying on differing sets of guards among them, at the same time.
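for illustration, the shared-guard selection in the uncensored case could be sketched as below; the relay dicts are a simplified stand-in for real consensus parsing (e.g. via stem), not an actual API.

```python
import random

def pick_shared_guard(relays, rng=random):
    """Pick one Guard-flagged relay with an open DirPort (usable as a
    bridge, per Mike's note) for every client on the local network,
    shrinking the LAN's exposure to the Guard population.
    `relays` is a list of dicts like
    {"nickname": str, "flags": [...], "dir_port": int or None},
    a simplified stand-in for a parsed consensus."""
    candidates = [r for r in relays
                  if "Guard" in r["flags"] and r.get("dir_port")]
    if not candidates:
        raise ValueError("no Guard relay with a DirPort in consensus")
    return rng.choice(candidates)
```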
The fact that the router is in control of the client configuration means that it serves as an additional security layer to protect against compromise of the browser. Since the browser and the rest of the end-user's computer have a much higher vulnerability surface than a router does, and are also much harder to audit for correctness than simply verifying that a few bridge lines are as expected, this configuration direction is far superior to the browser automatically configuring the router. It also simplifies the user experience for setting up a whole group of people on a secure Tor network at once.
agreed; this would make the best configuration, the best user experience, and more resilience against browser attacks.
thank you for the feedback!
As I've said in the past, if someone is willing to deploy the router side of this protocol in an easy-to-use router formfactor, we would implement the corresponding part in Tor Launcher. This would cover Tails, Tor Browser, Tor Birdy, and Tor Messenger users right off the bat.
Tor Birdy is another i had not considered, and is absolutely worth including, my own aversion to encrypted email aside...
as for the Tor Launcher coding, i'm holding you to that promise! :)
best regards, and thanks again,
On 5/4/15, Mike Perry mikeperry@torproject.org wrote:
... In my opinion, the most interesting use case for these devices is where Tor Launcher implements a peering mechanism whereby the user can click a button at some point in the initial connection wizard that says "My Router Knows My Tor Configuration."
hi Mike,
i called this "Device Driven Configuration" in the updated document, and added two FAQ entries regarding device public key verification for use in the JSON-based device driven configuration between Tor Launcher and the Tor enforcing device.
thanks again!
P.S. additional edits continue; any and all feedback is still solicited. it's not too late... :)
On Sat, 2 May 2015 20:37:17 -0700 coderman coderman@gmail.com wrote:
a friend and i are working on a Tor router design that doesn't compromise anonymity for convenience. [0][1][2][3][4]
we're soliciting feedback as part of a go / no-go decision on continuing this effort.
in particular, the design is intended to meet the scrutiny of Nick M., Roger, and Mike P., as the focus on supporting Tor Browser and Tor on each client indicates.
I am bored so I figured I would read this big document; here are some comments from somebody who doesn't matter:
1.3 > Warning conditions:
Does "Client privacy leak detected" mean the software would warn in the case of a LAN client attempting to make an unsecured connection or leak DNS data or something like that? Provided the leak never makes it off the routing device, then I think that is an acceptable warning, but if it leaves the device that's a pretty critical error in my opinion.
2.4 > Device software and configuration technical requirements
"Require VPN on local WiFi and Ethernet network": does this mean a VPN connection to the router itself, as in establishing an IPSec tunnel from LAN_1 --> [Router] before any layer-four traffic is allowed? I see the FAQ about WiFi, which makes sense, but extending the VPN requirement to the physical network I find odd.
I suggest also adding mandatory audit logging to the scope of the router software. In my opinion any and all state changes, whether automatic (Tor circuit change) or manual (administrator changing configuration) must be logged.
2.5/2.6 > Privacy Directory Requirements
Is the expectation that every client attached to the router would be running this privacy director software or only the router administrator(s)? In the former case, is there any bad exit indication that could/would be made to the clients?
How are authentication and authorization of this privacy director software going to be performed with the router? In 1.2 the router would be passwordlessly set up, but after that how would an administrator ensure that only they are able to mutate the device setup?
Also, "Filter local traffic that is not Tor when active": does this mean that the privacy director software will require escalated privileges on the numerous platforms in order to modify local firewall states?
Interesting effort, good luck
-warms0x
On 5/3/15, warms0x warms0x@riseup.net wrote:
... I am bored so I figured I would read this big document, here are some comments from somebody who took the time to care:
thanks! :)
1.3 > Warning conditions:
Does "Client privacy leak detected" mean the software would warn in the case of a LAN client attempting to make an unsecured connection or leak DNS data or something like that?
correct. the client is filtering locally, and only Tor Browser traffic is expected to egress. if anything else is sent, not destined for Tor relays, it is dropped and warned about.
Provided the leak never makes it off the routing device, then I think that is an acceptable warning
correct. this leak traffic is dropped, then warned about.
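for illustration, the drop-and-warn decision amounts to something like the following; the relay addresses are placeholders standing in for a consensus-derived set.

```python
# Minimal sketch of the router's egress decision: traffic to a known
# Tor relay is forwarded, anything else is dropped on-device and a
# "client privacy leak detected" warning is raised. The relay set
# would really be derived from the current consensus.
TOR_RELAY_IPS = {"192.0.2.10", "198.51.100.7"}  # placeholder addresses

def filter_egress(dst_ip, relay_ips=TOR_RELAY_IPS):
    """Return ('forward', None) for Tor-bound traffic, otherwise
    ('drop', warning); the leak never leaves the device."""
    if dst_ip in relay_ips:
        return "forward", None
    return "drop", f"client privacy leak detected: egress to {dst_ip} blocked"
```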
2.4 > Device software and configuration technical requirements
"Require VPN on local WiFi and Ethernet network " does this mean VPN connection to the router itself, as in establishing an IPSec tunnel from LAN_1 --> [Router] before any layer four traffic is allowed? I see the FAQ about Wifi, which makes sense, but extending the VPN requirement to the physical network I find odd.
this serves three purposes:
1. WiFi is inherently insecure, per the RC4 defect.
2. if using open WiFi (no WPA2-Enterprise EAP-TLS, nor any lesser privacy setting), the VPN avoids TCP injection and other DoS attacks.
3. the privacy director is better able to transition between public networks and a Tor enforcing network if the VPN coming up is used as the signal of successful pairing with the expected router. otherwise, un-authenticated details like IP, MAC, and hostname provide only a tentative indication of a Tor enforcing router upstream.
I suggest also adding mandatory audit logging to the scope of the router software. In my opinion any and all state changes, whether automatic (Tor circuit change) or manual (administrator changing configuration) must be logged.
this is an important detail; thank you for bringing it up. i will add the expected runtime logging and troubleshooting output, collected on the device and available to the owner via privacy director administrative access.
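a minimal sketch of what such an audit log could look like, hash-chained so the owner can detect truncation or tampering; the schema is illustrative, not from the design document.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained record of state changes, whether
    automatic (e.g. a Tor circuit change) or manual (an administrator
    changing configuration). Each entry carries the digest of its
    predecessor, so removing or altering an entry breaks the chain."""
    def __init__(self):
        self.entries = []
        self._prev = "0" * 64  # genesis value for the first entry

    def record(self, actor, change):
        entry = {"ts": time.time(), "actor": actor,
                 "change": change, "prev": self._prev}
        self._prev = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["digest"] = self._prev
        self.entries.append(entry)
        return entry
```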
2.5/2.6 > Privacy Directory Requirements
Is the expectation that every client attached to the router would be running this privacy directory software or only the router administrator(s)? In the former case, is there any bad exit indication that could/would be made to the clients?
only the owner is required to run this software. it is recommended that other users on the local network also run it, for the additional support it provides in keeping Tor Browser current and handling local egress filtering when on a Tor enforcing network.
bad exit reporting requires the privacy director software, run either as owner or as a normal user. there is no way to report bad exits through the captive portal web UI on the device.
How are authentication and authorization of this privacy director software going to be performed with the router? In 1.2 the router would be passwordlessly set up, but after that how would an administrator ensure that only they are able to mutate the device setup?
as device owner, the first-time setup, which requires directly connecting to the device, provides the key exchange used for all subsequent administrative activity.
if you don't hold the owner keys, you don't get to perform any administrative actions on the device.
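the first-time key exchange could follow a trust-on-first-use pattern along these lines; a real implementation would verify proper signatures (e.g. Ed25519) rather than compare raw key fingerprints, so treat this only as a sketch of the pairing logic.

```python
import hashlib

class OwnerPairing:
    """Trust-on-first-use sketch: the first directly-connected client
    pins its key as the owner key; later administrative requests must
    present the pinned key."""
    def __init__(self):
        self._owner_fpr = None

    def first_time_setup(self, owner_pubkey):
        """Pin the owner key during the directly-connected first-time
        setup; refuse if the device is already paired."""
        if self._owner_fpr is not None:
            raise PermissionError("device already paired to an owner")
        self._owner_fpr = hashlib.sha256(owner_pubkey).hexdigest()
        return self._owner_fpr

    def authorize_admin(self, pubkey):
        """Only the holder of the pinned owner key may perform
        administrative actions."""
        return (self._owner_fpr is not None and
                hashlib.sha256(pubkey).hexdigest() == self._owner_fpr)
```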
Also "Filter local traffic that is not Tor when active", does this mean that the privacy director software will require escalated privileges on the numerous platforms into order to modify local firewall states?
yes to modify firewall state.
it is possible to run without elevated privileges; however, filtering traffic and disabling services cannot be performed, and some automatic behaviors become manual (e.g. auto-detecting a transition on or off of a Tor enforcing network does not work in some scenarios, and the privacy director menu must be used to explicitly update the view.)
it is possible to constrain the delegation of these privileges, like sudo for calling hooks around network changes; however, this is currently out of scope given the poor state of client-side security posture overall.
Interesting effort, good luck
thanks for the feedback! we'll need the luck...
best regards,
On Sat, May 02, 2015 at 08:37:17PM -0700, coderman wrote:
a friend and i are working on a Tor router design that doesn't compromise anonymity for convenience. [0][1][2][3][4]
So, unlike a transparent tor router, this system is not intended to prevent malicious software on client computers from being able to learn the client computer's location, right? An attacker who has compromised some client software just needs to control a single relay in the consensus, and they'll be allowed to connect to it directly?
It is unclear to me what exactly this kind of tor router *is* supposed to protect against. (I haven't read the whole document yet but I read a few sections including Threat Model and I'm confused.)
~leif
On 5/4/15, Leif Ryge leif@synthesize.us wrote:
... So, unlike a transparent tor router, this system is not intended to prevent malicious software on client computers from being able to learn the client computer's location, right?
hello Leif!
this deserves a longer answer, but you're right. if the attacker is using Tor itself, a Tor enforcing gateway can't protect against those attacks.
DNS leaks, UDP exfiltration (like the MAC leaking attacks), and Flash-based proxy bypass: these types of attacks can be stopped and warned about.
against a malicious relay or a malicious hidden service, a Tor enforcing router can't discriminate or help at the network level.
An attacker who has compromised some client software just needs to control a single relay in the consensus, and they'll be allowed to connect to it directly?
you don't even need to control a relay; this could be performed via a hidden service as well.
It is unclear to me what exactly this kind of tor router *is* supposed to protect against. (I haven't read the whole document yet but I read a few sections including Threat Model and I'm confused.)
i will clarify to make this more apparent.
best regards,
On 5/4/15, coderman coderman@gmail.com wrote:
... this deserves a longer answer, but you're right. if the attacker is using Tor itself, a Tor enforcing gateway can't protect against those attacks.
i have updated the document to make this more clear.
it is hard to express the nuance of vulnerability here. for example, on Windows, if you can access file APIs, even from within a sandbox, you can reference a network path (WebDAV, SMB, etc.) that leverages system services to make a proxy bypass request, or a SOCKS wrapper bypass request.
that is a very different level of risk compared to arbitrary remote execution with privilege escalation: at the end of that chain, your attacker can read serial numbers off components for a perfect match, then report the results back along the hidden service command and control link.
the first can be mitigated by a Tor enforcing router, while the second is game over every time.
there is a rich field of mixed threats in-between, and mitigating measures clients can take, but the short of it is that endpoint security is and always will be critical to security and privacy.
best regards, and thanks again for your questions!
p.s. i also changed the Onion service FAQ entry to mention that one-time ephemeral hostnames are used by default, with persistent and vanity hostname options available via explicit opt-in.
On 5/2/15, coderman coderman@gmail.com wrote:
... we're soliciting feedback as part of a go / no-go decision on continuing this effort.
in particular, the design is intended to meet the scrutiny of Nick M., Roger, and Mike P. as the focus on support for Tor Browser and Tor on each client indicates...
some queried about status:
1. feedback incorporated (as FAQ and edits).
2. copyright has been assigned.
3. Tor Project, Inc. has not reviewed nor approved the design.
4. availability for this project is worsening rather than improving; progress is unlikely in the near future.
best regards,