Hello Tor-Dev,
My name is Alex Mages, and I have been doing pluggable transport research with Eugene Vasserman (CC) at Kansas State University.
Right now we're exploring latency-based attacks but are having trouble achieving a particular goal: a way to “ping” an arbitrary node in a client’s already-built (“live”) circuit. One-way timing would be ideal, but round-trip time would suffice. We can glean this information during circuit construction, but what about a live circuit? Ideally this would be something Tor already tracks periodically, but an on-demand measurement, or one obtained as a byproduct/side effect of another function, would also work. We have not been able to find a way to do this within the Tor (sub)protocol specs or the control port spec.
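(For the “during circuit construction” case mentioned above, one way to collect per-hop timings is to timestamp CIRC events from the control port. Below is a rough sketch using the stem controller library; the control-port number, the 30-second listen window, and the helper names are assumptions for illustration, not anything prescribed by the control-port spec.)

```python
import time

def watch_circuit_timing(ctrl_port=9051):
    """Record a wall-clock timestamp for every CIRC event tor reports.

    Assumes a local tor client with ControlPort set to 9051 and the
    third-party stem library installed (pip install stem).
    """
    from stem.control import Controller  # imported lazily; requires stem

    timings = {}  # circuit id -> list of (monotonic time, circuit status)

    def on_circ(event):
        timings.setdefault(event.id, []).append((time.monotonic(), event.status))

    with Controller.from_port(port=ctrl_port) as ctrl:
        ctrl.authenticate()
        ctrl.add_event_listener(on_circ, "CIRC")
        time.sleep(30)  # collect events for a while
    return timings

def per_hop_deltas(samples):
    """Differences between consecutive event timestamps for one circuit."""
    return [t2 - t1 for (t1, _), (t2, _) in zip(samples, samples[1:])]
```

The deltas between LAUNCHED/EXTENDED/BUILT events approximate how long each hop extension took, which is the construction-time signal described above.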
Any ideas, suggestions, or criticisms would be greatly appreciated.
Thanks,
Alex Mages
Use OnionCat and ping6, it is exactly what you want.
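(For context: OnionCat maps an onion service into the IPv6 range fd87:d87e:eb43::/48 so the peer can be reached with ordinary ping6. A minimal sketch of that address mapping is below; it assumes the classic 16-character v2 onion names, which are an 80-bit base32 service ID — modern 56-character v3 addresses do not fit this scheme.)

```python
import base64
import ipaddress

# OnionCat's IPv6 prefix, fd87:d87e:eb43::/48
ONIONCAT_PREFIX = bytes.fromhex("fd87d87eeb43")

def onion_to_ip6(onion_name):
    """Map a 16-char v2 onion name to its OnionCat IPv6 address."""
    service_id = base64.b32decode(onion_name.upper())  # 80 bits = 10 bytes
    if len(service_id) != 10:
        raise ValueError("expected a 16-character v2 onion name")
    return str(ipaddress.IPv6Address(ONIONCAT_PREFIX + service_id))
```

Once OnionCat has the tunnel up, ping6 against the resulting address yields round-trip times through the circuit to that one peer.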
Such "timing" attacks are within the scope of the NSA's "Tor Stinks" slide deck. Users should become familiar with them, with that deck, and with other attacks from over a decade ago, and with how Tor does not address them.
Note that OnionCat does not give you an "arbitrary node", only a peer you set up yourself. Still, some timing differences can be divined by selectively constructing the circuits you test, looking at setup timings, pushing characterizing traffic through them and through your own nodes, polling existing services, and so on. Please publish your results.
Hey,
On 21.01.22 14:57, Alexander Mages wrote:
You can measure the RTT between your client and a node by exiting through that node and intentionally violating its exit policy, such as connecting to 127.0.0.1:80. The node will return an error, and you can measure the RTT as the time between sending the request and receiving the error. See https://naviga-tor.github.io/ for an example.
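(A minimal sketch of the measurement Robert describes, using only the standard library. It assumes a local tor client with its SOCKS5 listener on 127.0.0.1:9050; the function names are illustrative. The timer starts when the CONNECT request is sent and stops when tor relays back the exit's refusal.)

```python
import socket
import struct
import time

def socks5_connect_request(host, port):
    """Build a SOCKS5 CONNECT request for an IPv4 destination (RFC 1928)."""
    return b"\x05\x01\x00\x01" + socket.inet_aton(host) + struct.pack(">H", port)

def measure_exit_rtt(socks_port=9050):
    """Time one round trip through the circuit to a policy-rejected destination.

    Returns (rtt_seconds, socks_reply_code); a nonzero reply code is the
    refusal that every exit policy produces for 127.0.0.1:80.
    """
    with socket.create_connection(("127.0.0.1", socks_port)) as s:
        s.sendall(b"\x05\x01\x00")  # greeting: one method, no authentication
        s.recv(2)                   # server's method selection
        start = time.monotonic()
        s.sendall(socks5_connect_request("127.0.0.1", 80))
        reply = s.recv(10)          # arrives after the exit sends its error back
        return time.monotonic() - start, reply[1]
```

Repeating the measurement over circuits that differ only in the last hop isolates that node's contribution to the RTT.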
All the best, Robert
I appreciate all the suggestions!
Thanks, Alex
On Sat, Feb 12, 2022 at 3:59 AM r.a@posteo.net wrote: