Summarized question:
Do you recommend allowing Workstation VMs of different security levels to communicate with the same Tor instance? Note that they connect to the Gateway via separate internal networks and have different interfaces and ControlPorts, so inter-workstation communication should not be possible.
Single Tor Gateway, Multiple Workstations
Pros:
* Same guard node means less chance of picking a malicious one.
* A single Gateway VM uses fewer resources.

Cons:
* Some unforeseen way a malicious VM "X" can link the activities of, or influence the traffic of, VM "Y":
  * Maybe sending NEWNYM requests in a timed pattern that changes the exit IPs of VM Y's traffic, revealing they are behind the same client? (See the sketch after this list.)
  * Maybe eavesdropping on HSes running on VM Y's behalf?
  * Something else we are not aware of?
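To make the NEWNYM concern concrete, here is a minimal sketch of what a malicious Workstation with ControlPort access could try, using the stem library; the Gateway address and ControlPort (10.152.152.10:9051) are illustrative assumptions, not confirmed details:

    # Sketch of the hypothesised attack: VM "X" sends NEWNYM signals in a
    # distinctive timed pattern. If VM "Y" shares the same Tor client, Y's
    # exit IP changes may follow the same pattern, suggesting a shared client.
    import time
    from stem import Signal
    from stem.control import Controller

    # Illustrative address/port of the shared Gateway's ControlPort.
    with Controller.from_port(address='10.152.152.10', port=9051) as controller:
        controller.authenticate()  # assumes cookie auth or no password
        for delay in (10, 20, 10, 40, 10, 20):  # a recognisable "beacon" pattern
            controller.signal(Signal.NEWNYM)   # request clean circuits for new streams
            time.sleep(delay)  # Tor rate-limits NEWNYM to roughly one per 10 seconds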
Multi-Tor Gateways mapped 1:1 to Workstation VMs
Pros:
* Conceptually simple. Each pair uses a different Tor instance, so there is no need to worry about all these questions.

Cons:
* Uses a different entry guard per instance, which can increase the chance of running into a malicious relay that can deanonymize some of the traffic.
* Uses extra resources (though not much, as a Tor Gateway can run with as little as 192 MB RAM).
On 22 Oct. 2016, at 07:38, bancfc@openmailbox.org wrote:
> Summarized question:
>
> Do you recommend allowing Workstation VMs of different security levels to communicate with the same Tor instance? [...]
>
> Single Tor Gateway, Multiple Workstations
>
> Cons:
> * Some unforeseen way a malicious VM "X" can link the activities of, or influence the traffic of, VM "Y":
>   * [...]
>   * Something else we are not aware of?
* Caching of DNS, HS descriptors, preemptive circuits, etc.
* VMs can leak other VMs' guards and even entire circuits:
  * easily, without a control port filter
  * perhaps via some discovery attacks, even with a filter
> Multi-Tor Gateways mapped 1:1 to Workstation VMs
> [...]
> Cons:
> * Uses a different entry guard per instance, which can increase the chance of running into a malicious relay that can deanonymize some of the traffic.
> * Uses extra resources (though not much, as a Tor Gateway can run with as little as 192 MB RAM).
* Links traffic at different guards to the same source IP address.
* Even VM-level isolation is not proof against some attacks.
T
Thank you for your answers!
teor:
> - Caching of DNS, HS descriptors, preemptive circuits, etc.
Can you please elaborate on 'etc.'?
I am asking because stream isolation for DNS already has a ticket: https://trac.torproject.org/projects/tor/ticket/20555
HS cache isolation also has a ticket: https://trac.torproject.org/projects/tor/ticket/15938
Looks like preemptive circuit isolation does not have a ticket yet.
If you could please elaborate on 'etc.' we might be able to complete the stack of missing tickets.
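For reference, this is the kind of per-listener isolation Tor already supports; a minimal sketch using stem to launch a client with one SocksPort per Workstation network (addresses, ports, and paths are made up for illustration). Streams on different listeners are isolated from each other by default, but this isolates streams, not the shared client state (DNS cache, HS descriptor cache, guards) these tickets are about:

    # Sketch: one SocksPort per Workstation on a shared Tor client.
    import stem.process

    tor_process = stem.process.launch_tor_with_config(
        config={
            # Hypothetical internal-network address for the Gateway.
            'SocksPort': [
                '10.152.152.10:9100 IsolateDestAddr',  # Workstation 1
                '10.152.152.10:9102 IsolateDestAddr',  # Workstation 2
            ],
            'DNSPort': '10.152.152.10:5300',
            'DataDirectory': '/var/lib/tor-shared-example',
        },
    )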
Cheers, Patrick
On 5 Nov. 2016, at 11:26, Patrick Schleizer <patrick-mailinglists@whonix.org> wrote:

> Thank you for your answers!
>
> teor:
>> - Caching of DNS, HS descriptors, preemptive circuits, etc.
>
> Can you please elaborate on 'etc.'?
>
> I am asking because stream isolation for DNS already has a ticket: https://trac.torproject.org/projects/tor/ticket/20555
>
> HS cache isolation also has a ticket: https://trac.torproject.org/projects/tor/ticket/15938
>
> Looks like preemptive circuit isolation does not have a ticket yet.
Preemptive circuits aren't a caching mechanism, and can't really be isolated in the way you think: circuits are isolated by the existing mechanisms, but this is likely not enough to defend against hostile clients sharing an instance.
Isolation is a defence against the remote end, not the client end.
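To illustrate that distinction: SOCKS-auth isolation (on by default) gives two local identities different circuits, so the remote end cannot link them by exit node, yet both still share one Tor process and all its state. A sketch assuming requests with SOCKS support and a SocksPort at 127.0.0.1:9050 (illustrative values):

    # Sketch: two "identities" sharing one SocksPort. IsolateSOCKSAuth puts
    # them on different circuits, which defends against linking by the remote
    # end. A hostile *local* client on the same instance is a different threat.
    import requests  # needs the 'requests[socks]' extra for SOCKS support

    def exit_ip(socks_user):
        proxy = f'socks5h://{socks_user}:x@127.0.0.1:9050'
        return requests.get('https://check.torproject.org/api/ip',
                            proxies={'http': proxy, 'https': proxy},
                            timeout=60).json()

    print(exit_ip('client-a'))  # likely different exits:
    print(exit_ip('client-b'))  # the streams are isolated by SOCKS username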
> If you could please elaborate on 'etc.' we might be able to complete the stack of missing tickets.
* Circuit cannibalisation (yet another thing that can't be isolated)
* SSL state
* Guard state
* Consensus availability and content
* Descriptor availability and content
* Connectivity (or lack thereof)
* Uptime
* ControlPort config information
* ControlPort config changes

And many more.
The supported way to isolate many of these things is to run a separate Tor instance, preferably on a separate machine on a separate network. We don't even recommend running a SOCKS client and a hidden service on the same instance.
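A rough sketch of that recommendation using stem, launching two fully separate clients on one host (ports and paths are illustrative); note the advice above prefers separate machines on separate networks over separate processes on one machine:

    # Sketch: one Tor instance per client, each with its own DataDirectory,
    # so guard state, caches, and descriptors are not shared. Processes on
    # one host still share an IP address; separate machines isolate more.
    import stem.process

    def launch_instance(socks_port, data_dir):
        return stem.process.launch_tor_with_config(config={
            'SocksPort': str(socks_port),
            'DataDirectory': data_dir,  # separate on-disk state per instance
        })

    client_tor = launch_instance(9250, '/var/lib/tor-client-example')
    hs_tor = launch_instance(9252, '/var/lib/tor-hs-example')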
T
On 2016-11-05 01:36, teor wrote:
> On 5 Nov. 2016, at 11:26, Patrick Schleizer <patrick-mailinglists@whonix.org> wrote:
>> If you could please elaborate on 'etc.' we might be able to complete the stack of missing tickets.
>
> [...]
>
> * Circuit cannibalisation (yet another thing that can't be isolated)
> * SSL state
> * Guard state
> * Consensus availability and content
> * Descriptor availability and content
> * Connectivity (or lack thereof)
> * Uptime
> * ControlPort config information
> * ControlPort config changes
>
> And many more.
>
> The supported way to isolate many of these things is to run a separate Tor instance, preferably on a separate machine on a separate network. We don't even recommend running a SOCKS client and a hidden service on the same instance.
>
> T
Thanks. This is very useful info to keep in mind since it affects usage advice.

There is one more related scenario: a single Gateway-Workstation pair where the Workstation VM is rolled back to a clean state between sessions, but only a NEWNYM is triggered on the Gateway, without any restart. My guess is that many of the risks you detailed still hold, so I want to find out the best opsec for handling this:
Do these risks persist across a reboot of the Tor VM?
Is restarting the Tor process enough?
Or should we instead recommend that users make an initial clean snapshot of the Tor VM and roll back to that?
On 6 Nov. 2016, at 02:30, bancfc@openmailbox.org wrote:
> On 2016-11-05 01:36, teor wrote:
> [...]
>
> Thanks. This is very useful info to keep in mind since it affects usage advice.
>
> There is one more related scenario: a single Gateway-Workstation pair where the Workstation VM is rolled back to a clean state between sessions, but only a NEWNYM is triggered on the Gateway, without any restart. My guess is that many of the risks you detailed still hold, so I want to find out the best opsec for handling this:
> Do these risks persist across a reboot of the Tor VM?
A reboot provides strong linkability between all client and hidden service instances sharing the VM.

Some risks associated with on-disk state or external state persist, including:

* Guard state
* Consensus availability and content
* Descriptor availability and content
* Connectivity (or lack thereof)
* Uptime
* ControlPort config information
* ControlPort config changes

And many more.
> Is restarting the Tor process enough?
A tor restart provides strong linkability between all client and hidden service instances sharing the VM.

Some risks associated with on-disk state or external state persist, including:

* Guard state
* Consensus availability and content
* Descriptor availability and content
* Connectivity (or lack thereof)
* Uptime
* ControlPort config information
* ControlPort config changes

And many more.
> Or should we instead recommend that users make an initial clean snapshot of the Tor VM and roll back to that?
A roll back provides strong linkability between all client and hidden service instances sharing the VM.

Some risks associated with on-disk state or external state persist, including:

* Connectivity (or lack thereof)
* Uptime
* (Some) ControlPort config information
* ControlPort config changes

And many more.
Also, what is "clean"? Before or after guards are chosen?
Because if the snapshot is before guards are chosen, then the user is likely to choose a malicious guard.
If the snapshot is after, local malicious clients can leak the guards.
This is probably the most obvious tradeoff, but I am sure there are more subtle state-based tradeoffs as well.
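For anyone deciding where to snapshot: the guard choice and the caches above live under Tor's DataDirectory, so they are exactly what a snapshot freezes. A hedged sketch (the path is the common Debian default; the 'Guard' line format varies across Tor versions):

    # Sketch: list the persistent on-disk state a VM snapshot would capture.
    from pathlib import Path

    data_dir = Path('/var/lib/tor')  # assumed default DataDirectory
    for name in ('state', 'cached-microdesc-consensus', 'cached-microdescs'):
        path = data_dir / name
        print(path, path.stat().st_size if path.exists() else '(missing)')

    # The 'state' file records the chosen entry guards; rolling back to a
    # snapshot restores whichever guards were sampled before it was taken.
    state_file = data_dir / 'state'
    if state_file.exists():
        for line in state_file.read_text().splitlines():
            if line.startswith('Guard '):
                print(line)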
I want to be very clear here:
The supported way to isolate many of these things is to run a separate Tor instance, preferably on a separate machine on a separate network. We don't even recommend running a SOCKS client and a hidden service on the same instance.
Tor is not designed to defend against malicious clients that have access to the ControlPort.
Some of Tor's guarantees do not hold when multiple clients at different trust levels share a SOCKSPort.
T
On 21/10/16 21:38, bancfc@openmailbox.org wrote:
> Cons:
> * Some unforeseen way a malicious VM "X" can link the activities of, or influence the traffic of, VM "Y":
>   * Maybe sending NEWNYM requests in a timed pattern that changes the exit IPs of VM Y's traffic, revealing they are behind the same client?
>   * Maybe eavesdropping on HSes running on VM Y's behalf?
>   * Something else we are not aware of?
If each VM has full access to the control port, even something as simple as "SETCONF DisableNetwork" could be used for traffic confirmation.
ExcludeNodes, ExcludeExitNodes and MapAddress could be used to force another VM's traffic through certain nodes.
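A sketch of both of those attacks through the control protocol, using stem against an assumed shared ControlPort at 10.152.152.10:9051 (illustrative):

    # Sketch: with full ControlPort access, a malicious VM can pause the
    # whole client on cue (traffic confirmation) or constrain every VM's
    # exit selection.
    import time
    from stem.control import Controller

    with Controller.from_port(address='10.152.152.10', port=9051) as controller:
        controller.authenticate()

        # Traffic confirmation: toggle all Tor traffic off and on in a
        # pattern a network-level observer can recognise.
        controller.set_conf('DisableNetwork', '1')
        time.sleep(30)
        controller.set_conf('DisableNetwork', '0')

        # Steer other VMs' circuits: exclude exits so the remaining
        # candidates are easier for the attacker to watch ('{zz}' is an
        # illustrative country-code pattern, not a real recommendation).
        controller.set_conf('ExcludeExitNodes', '{zz}')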
Bandwidth events could be used for traffic analysis of another VM's traffic.
ADDRMAP events look like they might leak information about the hosts another VM connects to. Likewise, DANGEROUS_PORT leaks information about ports, and HS_DESC about HS descriptor lookups.
I'm not sure if covert channels between two VMs (e.g. for exfiltration) are part of your threat model, but events would be a rich source of those too.
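A sketch of that passive snooping with stem (same assumed ControlPort); BW, ADDRMAP, and HS_DESC are the event names mentioned above:

    # Sketch: a malicious VM subscribes to control-port events. BW gives
    # per-second byte counts (traffic analysis of the other VMs' activity),
    # ADDRMAP leaks hostnames other VMs resolve, and HS_DESC leaks which
    # hidden services they look up.
    import time
    from stem.control import Controller, EventType

    def log_event(event):
        print(event)  # analyse or exfiltrate instead of printing

    with Controller.from_port(address='10.152.152.10', port=9051) as controller:
        controller.authenticate()
        for event_type in (EventType.BW, EventType.ADDRMAP, EventType.HS_DESC):
            controller.add_event_listener(log_event, event_type)
        time.sleep(60)  # keep listening while other VMs generate traffic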
Cheers, Michael