On 6 Nov. 2016, at 02:30, bancfc@openmailbox.org wrote:
On 2016-11-05 01:36, teor wrote:
On 5 Nov. 2016, at 11:26, Patrick Schleizer <patrick-mailinglists@whonix.org> wrote:

Thank you for your answers!

teor:
- Caching of DNS, HS descriptors, preemptive circuits, etc.
Can you please elaborate on 'etc.'? I am asking because stream isolation for DNS already has a ticket:
https://trac.torproject.org/projects/tor/ticket/20555
HS cache isolation also has a ticket:
https://trac.torproject.org/projects/tor/ticket/15938
It looks like preemptive circuit isolation does not have a ticket yet.
Preemptive circuits aren't a caching mechanism, and can't really be isolated in the way you think - circuits are isolated by existing mechanisms, but this is likely not enough to defend against hostile clients sharing an instance. Isolation is a defence against the remote end, not the client end.
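For reference, the stream isolation being discussed is configured client-side in torrc. A minimal sketch, with illustrative port numbers (check your tor version's manual page for the exact flag semantics):

```
# Each SOCKSPort line creates a separate listener; streams arriving on
# different listeners are kept on separate circuits by default.
SOCKSPort 9150 IsolateDestAddr IsolateDestPort
SOCKSPort 9151

# Resolve DNS through tor on its own port.
DNSPort 5353
```

Note that this isolates circuits from the remote end's perspective, not the shared caches and state discussed above.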
If you could please elaborate on 'etc.' we might be able to complete the stack of missing tickets.
- Circuit cannibalisation (yet another thing that can't be isolated)
- SSL state
- Guard state
- Consensus availability and content
- Descriptor availability and content
- Connectivity (or lack thereof)
- Uptime
- ControlPort config information
- ControlPort config changes
And many more.

The supported way to isolate many of these things is to run a separate Tor instance, preferably on a separate machine on a separate network. We don't even recommend running a SOCKS client and a hidden service on the same instance.
...
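Running fully separate instances means, at minimum, giving each its own DataDirectory and ports. A rough sketch of two independent torrc files (paths and ports here are illustrative, not recommendations):

```
# torrc for a client-only instance
DataDirectory /var/lib/tor-client
SOCKSPort 9050
ControlPort 9051
```

```
# torrc for a hidden-service-only instance (no SOCKS listener)
DataDirectory /var/lib/tor-hs
SOCKSPort 0
HiddenServiceDir /var/lib/tor-hs/service
HiddenServicePort 80 127.0.0.1:8080
```

Because each instance has its own DataDirectory, they keep separate guard state, consensus, and descriptor caches, which is the point of the recommendation above.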
Thanks. This is very useful info to keep in mind, since it affects usage advice.
There is one more related scenario: a single Gateway-Workstation pair where the Workstation VM is rolled back to a clean state between sessions, but only a NEWNYM is triggered on the Gateway, without any restarts. My guess is that many of the risks you detailed still hold, so I want to find out the best opsec to handle this:
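For context, the NEWNYM in question is a control-port signal that marks existing circuits as unusable for new connections; it does not clear on-disk state such as the guard list or cached consensus. The exchange on the ControlPort looks roughly like this (assuming password authentication is configured; cookie auth differs):

```
AUTHENTICATE "mypassword"
250 OK
SIGNAL NEWNYM
250 OK
```

This is why the answers below distinguish NEWNYM from restarts, reboots, and rollbacks: each clears a different subset of state.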
Do these risks persist across reboot of the Tor VM?
A reboot provides strong linkability between all client and hidden service instances sharing the VM.
Some risks associated with on-disk state or external state persist, including:
- Guard state
- Consensus availability and content
- Descriptor availability and content
- Connectivity (or lack thereof)
- Uptime
- ControlPort config information
- ControlPort config changes
And many more.
Is restarting of the Tor process enough?
A tor restart provides strong linkability between all client and hidden service instances sharing the VM.
Some risks associated with on-disk state or external state persist, including:
- Guard state
- Consensus availability and content
- Descriptor availability and content
- Connectivity (or lack thereof)
- Uptime
- ControlPort config information
- ControlPort config changes
And many more.
Or should we recommend instead that users make an initial clean snapshot of the Tor VM and roll back to this instead?
A roll back provides strong linkability between all client and hidden service instances sharing the VM.
Some risks associated with on-disk state or external state persist, including:
- Connectivity (or lack thereof)
- Uptime
- (Some) ControlPort config information
- ControlPort config changes
And many more.
Also, what is "clean"? Before or after guards are chosen?
Because if the snapshot is before guards are chosen, then the user is likely to choose a malicious guard.
If the snapshot is after, local malicious clients can leak the guards.
This is probably the most obvious tradeoff, but I am sure there are more subtle state-based tradeoffs as well.
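To make the guard tradeoff concrete: the guard choice is recorded in the state file under the instance's DataDirectory, so any snapshot taken after bootstrap captures it. The exact format varies by tor version; older versions use lines roughly like this (nickname and fingerprint here are made up):

```
# Excerpt from DataDirectory/state (illustrative values only)
EntryGuard exampleguard 0123456789ABCDEF0123456789ABCDEF01234567 DirCache
EntryGuardAddedBy 0123456789ABCDEF0123456789ABCDEF01234567 0.2.8.9 2016-10-01 12:00:00
```

A snapshot taken before these lines exist forces a fresh guard choice on every rollback (exposure to malicious guards); a snapshot taken after pins the guard but bakes it into state that any local client with filesystem or ControlPort access can read.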
I want to be very clear here:
The supported way to isolate many of these things is to run a separate Tor instance, preferably on a separate machine on a separate network. We don't even recommend running a SOCKS client and a hidden service on the same instance.
Tor is not designed to defend against malicious clients that have access to the ControlPort.
Some of Tor's guarantees do not hold when multiple clients at different trust levels share a SOCKSPort.
T