The stem documentation for create_ephemeral_hidden_service [1] says:
"Changed in version 1.5.0: Added support for non-anonymous services."
But I can't figure out how to actually use this feature. There doesn't
seem to be a new argument for saying that you want your onion service
to be non-anonymous.
It also says, "Changed in version 1.5.0: Added the basic_auth argument."
And in that case there really is a new basic_auth argument you can pass
into the function to use the feature.
[1]
https://stem.torproject.org/api/control.html#stem.control.Controller.create…
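For reference, here is a sketch of how I understand the basic_auth argument is meant to be used (my own guess at idiomatic usage, not taken verbatim from the stem docs; the 'alice' client name and port numbers are hypothetical). Note there is no analogous argument for non-anonymous mode, which is exactly the confusion above:

```python
# Sketch of the basic_auth argument added in stem 1.5.0.
# The client name 'alice' and the ports are hypothetical.

def start_authed_service(controller, virtual_port=80, target_port=8080):
    """Create an ephemeral onion service that requires basic auth.

    basic_auth maps client names to auth credentials; a value of None
    asks tor to generate a credential for that client, which is
    returned in the ADD_ONION response.
    """
    return controller.create_ephemeral_hidden_service(
        {virtual_port: target_port},
        await_publication=True,
        basic_auth={'alice': None},
    )

# Usage (requires a running tor with a control port):
#
#   from stem.control import Controller
#   with Controller.from_port() as controller:
#       controller.authenticate()
#       print(start_authed_service(controller).service_id)
```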
Attached is the second draft of the Pluggable Transport 2.0 Specification.
If you have feedback on this draft, please send me your comments by July 20.
Hi Prateek, Yixin, (and please involve your other authors as you like),
(I'm including tor-dev here too so other Tor people can follow along,
and maybe even get involved in the research or the discussion.)
I looked through "Counter-RAPTOR: Safeguarding Tor Against Active
Routing Attacks":
https://arxiv.org/abs/1704.00843
For the tl;dr for others here, the paper: a) comes up with metrics for
how to measure resilience of Tor relays to BGP hijacking attacks, and
then does the measurements; b) describes a way that clients can choose
their guards to be less vulnerable to BGP hijacks, while also considering
performance and anonymity loss when guard choice is influenced by client
location; and c) builds a monitoring system that takes live BGP feeds
and looks for routing table anomalies that could be hijack attempts.
Here are some hopefully useful thoughts:
-----------------------------------------------------------------------
0) Since I opted to write these thoughts in public, I should put a
little note here in case any journalists run across it and wonder. Yay
research! We love research on Tor -- in fact, research like this is the
reason Tor is so strong. For many more details about our perspective on
Tor research papers, see
https://blog.torproject.org/blog/tor-heart-pets-and-privacy-research-commun…
-----------------------------------------------------------------------
1a) The "live BGP feed anomaly detection" part sounds really interesting,
since in theory we could start using it really soon now. Have you
continued to run it since you wrote the paper? Have you done any more
recent analysis on its false positive rate since then?
I guess one of the real challenges here is that since most of the alerts
are false positives, we really need a routing expert to be able to look
at each alert and assess whether we should be worried about it. How hard
is it to locate such an expert? Is there even such a thing as an expert
in all routing tables, or do we need expertise in "what that part of
the network is supposed to look like", which doesn't easily scale to
the whole Internet?
Or maybe said another way, how much headway can we make on automating
the analysis, to make the frequency of alerts manageable?
I ask because it's really easy to write a tool that sends a bunch of
warnings, and if some of them are false positives, or heck even if
they're not but we don't know how to assess how bad they really are,
then all we've done is make yet another automated emailer. (We've made
a set of these already, to e.g. notice when relays change their identity
key a lot:
https://gitweb.torproject.org/doctor.git/tree/
but often nobody can figure out whether such an anomaly is really an
attack or what, so it's a constant struggle to keep the volume low enough
that people don't just ignore the mails.)
The big picture question is: what steps remain from what you have now
to something that we can actually use?
1b) How does your live-BGP-feed-anomaly-detector compare (either in
design, or in closeness to actually being usable ;) to the one Micah
Sherr was working on from their PETS 2016 paper?
https://security.cs.georgetown.edu/~msherr/reviewed_abstracts.html#tor-data…
1c) Your paper suggests that an alert from a potential hijack attempt
could make clients abandon the guard for a while, to keep clients safe
from hijack attempts. What about second-order effects of such a design,
where the attacker's *goal* is to get clients to abandon a guard, so they
add some sketchy routes somewhere to trigger an alert? Specifically,
how much easier is it to add sketchy routes that make it look like
somebody is attempting an attack, compared to actually succeeding at
hijacking traffic?
I guess a related question (sorry for my BGP naivete) is: if we're worried
about false positives in the alerts, how much authentication and/or
attribution is there for sketchy routing table entries in general? Can
some jerk drive up our false positive rate, by adding scary entries
here and there, in a way that's sustainable? Or heck, can some jerk DDoS
parts of the Internet in a way that induces routing table changes that
we think look sketchy? These are not reasons to not take the first steps
in the arms race, but it's good to know what the later steps might be.
-----------------------------------------------------------------------
2a) Re changing guard selection, you should check out proposal 271,
which resulted in the new guard-spec.txt as of Tor 0.3.0.x:
https://gitweb.torproject.org/torspec.git/tree/guard-spec.txt
I don't fully understand it yet (so many things!), but I bet any future
guard selection change proposal should be relative to this design.
2b) Your guard selection algorithm makes the assumption that relays with
the Guard flag are the only ones worth choosing from, and then describes
a way to choose from among them with different weightings. But you could
take a step back, and decide that resilience to BGP hijack should be one
of the factors for whether a relay gets the Guard flag in the first place.
It sounded from your analysis like some ASes, like OVH, are simply
bad news for (nearly) all Tor clients. Your proposed guard selection
strategy reduced, but did not eliminate, the chances that clients would
get screwed by picking one of these OVH relays. The tradeoff was that
by only reducing the chances, you kept the performance changes from
being as extreme as they might otherwise have been.
How much of the scariness of a relay is a function of the location of
the particular client who is considering using it, and how much is a
function of the average (expected) locations of clients? That is, can we
identify relays that are likely to be bad news for many different clients,
and downplay their weights (or withhold the Guard flag) for everybody?
The advantage of making the same decision for all clients is that you
can get rid of the "what does guard choice tell you about the client"
anonymity question, which is a big win if the rest of the effects aren't
too bad.
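To be concrete about the shape I have in mind, here is a toy sketch of weighting guards by a mix of bandwidth and resilience. The alpha blend and the normalization are my own illustration, not necessarily the paper's exact formula:

```python
import random

def guard_weights(guards, alpha=0.5):
    """Blend BGP-hijack resilience with bandwidth for guard selection.

    guards is a list of (bandwidth, resilience) pairs; alpha trades
    resilience against performance. Both quantities are normalized so
    that neither dominates simply because of its units.
    """
    total_bw = sum(bw for bw, _ in guards) or 1
    total_res = sum(r for _, r in guards) or 1
    return [alpha * (r / total_res) + (1 - alpha) * (bw / total_bw)
            for bw, r in guards]

def pick_guard(guards, alpha=0.5, rng=random):
    """Sample one guard index with probability proportional to its
    blended weight."""
    weights = guard_weights(guards, alpha)
    return rng.choices(range(len(guards)), weights=weights, k=1)[0]
```

Making the same weights apply to every client (rather than computing resilience per client location) is what removes the "what does guard choice tell you about the client" question.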
Which leads me to the next topic:
-----------------------------------------------------------------------
3) I think you're right that when analyzing a new path selection strategy,
there are three big things to investigate:
a) Does the new behavior adequately accomplish the goal that made you want
a new path selection strategy (in this case resilience to BGP attacks)?
b) What does the new behavior do to anonymity, both in terms of the global
effect (e.g. by flattening the selection weights or by concentrating
traffic in fewer relays or on fewer networks) and on the individual
epistemic side (e.g. by leaking information about the user because of
behavior that is a function of sensitive user details)?
c) What are the expected changes to performance, and are there particular
scenarios (like high load or low load) that have higher or lower impact?
I confess that I don't really buy your analysis for 'b' or 'c' in this
paper. Average change in entropy doesn't tell me whether particular user
populations are especially impacted, and a tiny Shadow simulation with
one particular network load and client behavior doesn't tell me whether
things will or won't get much worse under other network loads or other
client behavior.
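(To be concrete about why: a single summary statistic like the Shannon entropy of the selection distribution, sketched below, averages over all clients, so a large impact on a small population can vanish in the mean.)

```python
import math

def selection_entropy(weights):
    """Shannon entropy (in bits) of a guard selection distribution.

    A single number like this can stay nearly flat even when one
    client population's choices become much more predictable.
    """
    total = sum(weights)
    probs = [w / total for w in weights if w > 0]
    return -sum(p * math.log2(p) for p in probs)
```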
I can't really fault this paper though, because the structure of an
academic research paper means you can only do so much in one paper, and
you did a bunch of other interesting things instead. We, the Tor research
community, really need better tools for reasoning about the interaction
between anonymity and performance.
In fact, there sure have been a lot of Tor path selection papers over
the past decade which each invent their own ad hoc analysis approach
for showing that their proposed change doesn't impact anonymity or
performance "too much". Is it time for a Systemization of Knowledge
paper on this area -- with the goal of coming up with best practices
that future papers can use to provide more convincing analysis?
--Roger
Hello,
here is some background information and a summary of proposal 247,
"Defending Against Guard Discovery Attacks using Vanguards", for people
who plan to work on this in the short-term future.
I include a list of open design topics (probably not exhaustive) and a list of
engineering topics. Some of the engineering work can be done in parallel with the design work.
==================== Background info ====================
* Proposal: https://gitweb.torproject.org/torspec.git/tree/proposals/247-hs-guard-disco…
* Discussion:
** Initial prop247 thread: https://lists.torproject.org/pipermail/tor-dev/2015-July/009066.html
** Recent prop247 thread: https://lists.torproject.org/pipermail/tor-dev/2015-September/009497.html
** Reading group notes of prop247: https://lists.torproject.org/pipermail/tor-dev/2016-January/010265.html
==================== Design topics ====================
* Optimize proposal parameters
** Optimize guardset sizes
** Optimize guardset lifetimes and prob distributions (minXX/maxXX/uniform?)
** To make an informed decision, we might need a prop247 simulator, or an actual PoC with txtorcon
* HOW to choose second-layer and third-layer guards?
** Should they be Guards? middles? Vanguards? Serious security / load balancing implications!
** Can guardsets share guards between them or are they disjoint? Particularly third-layer sets
** background: https://lists.torproject.org/pipermail/tor-dev/2016-January/010265.html
* HOW to avoid side-channel guard discovery threats?
** Can IP/RP be the same as first-layer guard?
** Can first-layer guard be the same as third-layer guard?
** background: https://gitweb.torproject.org/user/mikeperry/torspec.git/commit/?h=guard_di…
* Change path selection for IP circs to avoid third-layer guard linkability threats.
** Switch from [HS->G1->M->IP] to [HS->G1->G2->G3->IP] or even to [HS->G1->G2->G3->M->IP].
** Consider the latter option for HSDir circs as well?
** background: https://gitweb.torproject.org/user/mikeperry/torspec.git/commit/?h=guard_di…
* Should prop247 be optional or default?
** Consider making it optional for a testing period?
* How does prop247 affect network performance and load balancing?
** especially if it's enabled by default?
** Update load balancing proposal?
* Correct behavior for multiple HSes on a single host?
* Does prop247 influence guard fingerprinting (#10969) and should we care enough?
==================== Engineering topics ====================
* What's a good entrynodes API to implement prop247?
* What's a good state file API to implement prop247?
* Write prop247 simulator to verify security goals and optimize proposal parameters (see above).
* Write PoC with txtorcon!
* Write PoC with little-t-tor!
============================================================
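For the simulator item above, here is a toy sketch of the moving parts. The layer sizes, the disjointness of the sets, and the path layout are exactly the open questions listed, so every value here is a placeholder:

```python
import random

# Placeholder parameters; prop247 leaves the actual guardset sizes
# and lifetime distributions as open design questions.
NUM_SECOND_LAYER = 4
NUM_THIRD_LAYER = 16

def pick_guardsets(relays, rng=random):
    """Sample disjoint second- and third-layer guard sets from a relay
    list. Whether the sets may share guards is itself an open question."""
    chosen = rng.sample(relays, NUM_SECOND_LAYER + NUM_THIRD_LAYER)
    return chosen[:NUM_SECOND_LAYER], chosen[NUM_SECOND_LAYER:]

def build_path(first_guard, second_layer, third_layer, rng=random):
    """A vanguard-style path prefix [HS -> G1 -> G2 -> G3], before the
    IP/RP (and possibly an extra middle) is appended."""
    return [first_guard, rng.choice(second_layer), rng.choice(third_layer)]
```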
Hi there!
As planned in May [0], the next major release of Tor Metrics Library
is available:
https://dist.torproject.org/metrics-lib/2.0.0/
This time we have a special blog post about this release and Tor Metrics Library [1].
Please direct comments and questions to the metrics-team mailing list [2], or comment
on the blog post.
Cheers,
iwakeh
[0] https://lists.torproject.org/pipermail/tor-dev/2017-May/012261.html
[1] https://blog.torproject.org/blog/tor-descriptors-la-carte-tor-metrics-libra…
[2] https://lists.torproject.org/cgi-bin/mailman/listinfo/metrics-team
Hi all,
I've opened a Trac ticket at
https://trac.torproject.org/projects/tor/ticket/22745 about possibly
requiring all bug fixes to have associated regression tests. This is
aimed at Core Tor (starting with 0.3.2.x) but other Tor-related software
might want to consider a policy like this as well.
Ideally all bug fixes will have automated regression tests so we can
promptly recognize when they've regressed and fix them. I realize that
some of our code may be too complex for an automated regression test to
be feasible, so we would have a procedure for exceptions from this
requirement. (This would become part of the patch review process.)
Please comment on the ticket if you have opinions about this idea. (Or
respond in email if your reply wouldn't work well in the form of a
ticket comment.)
Thanks!
-Taylor
Hi,
The time period overlap section 2.2.4 in prop224 is under-specified:
https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.t…
1. During the overlap period, does the service use the new blinded key
for the new period, and the old blinded key for the old period?
I think the answer is yes, but this requires some deduction to work
out.
2. If the overlap period starts when a service sees the first consensus
with a new SRV, does the service stop using that SRV and blinded key:
* at the end of the period?
(that is, exactly 36 hours after the earliest the overlap period
could possibly have started.)
* exactly 36 hours after the SRV was first seen?
(that is, exactly 36 hours after the service started the overlap
period. For example, if the service fetched the consensus 2 hours
after it was created, it would end 2 hours after the end of the
period.)
* when the first reveal consensus is received with that SRV as the
previous SRV? (or some similar consensus-driven event)
Does every service on a tor instance start the overlap at the same
time?
https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.t…
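For reference, this is the time period calculation the overlap is relative to, as I read the spec. The constants are the spec's defaults (1440-minute periods rolling over at 12:00 UTC); this is a sketch of my reading, not normative:

```python
def time_period_num(unix_time, period_length_min=1440, offset_min=12 * 60):
    """Time period number for a unix timestamp, per my reading of
    prop224: minutes since the epoch, shifted by an offset so periods
    roll over at 12:00 UTC, divided by the period length (one day)."""
    minutes = unix_time // 60
    return (minutes - offset_min) // period_length_min
```

The questions above are about when, relative to these period boundaries and the SRV in the consensus, a service stops using the old blinded key.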
T
--
Tim Wilson-Brown (teor)
teor2345 at gmail dot com
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B
ricochet:ekmygaiu4rzgsk6n
xmpp: teor at torproject dot org
------------------------------------------------------------------------
Hi teor and Daniel!
Thank you so much for your reply! Your instructions are really helpful
to me.
teor:
> You probably need a %include directive in /etc/tor/torrc.
I tried adding the %include directive to /etc/tor/torrc and
/usr/share/tor/tor-service-defaults-torrc separately, and both of them
worked well :)
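For anyone else trying this, the kind of line I added (the directory path here is just an example):

```
# in /etc/tor/torrc -- load every config fragment from a directory
%include /etc/torrc.d/
```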
teor:
> If you get this working, please submit a patch to the Debian bug
> tracker.
No problem! But please forgive my ignorance, could you please explain
a little bit more why I should report it to the Debian BTS, instead
of tpo? In other words, what is the relationship between
packages.debian.org and deb.torproject.org?
According to cypherpunks[0]:
> The first released tor version with this feature is 0.3.1.1-alpha.
> As usual there will be alpha packages on deb.torproject.org
>
> If you want this feature _now_ you can use the nightly builds:
> https://deb.torproject.org/torproject.org/dists/tor-nightly-master-stretch
However, the highest tor version in Debian BTS right now is
3.0.8-1 [1], which means the feature has not been included in Debian yet?
My current understanding is that deb.torproject.org is the upstream of
packages.debian.org for the tor package, so a change made in
deb.torproject.org will be adopted by packages.debian.org after a while.
The following is my testing environment, which may be relevant to the
problem:
I tested the torrc.d feature in both Debian 8 and Whonix 13 (based on
Debian 8). Instead of downloading from packages.debian.org, I
downloaded tor from:
> deb http://deb.torproject.org/torproject.org jessie main
> deb-src http://deb.torproject.org/torproject.org jessie main
> deb http://deb.torproject.org/torproject.org tor-nightly-master-jessie main
The Tor version I tested was:
> Tor version 0.3.1.3-alpha-dev (git-a73d0fe9a87df762+b433dff)
Again, thank you very much, teor and Daniel! I really appreciate your
help!
Best,
iry
[0]: https://trac.torproject.org/projects/tor/ticket/1922
[1]: https://packages.debian.org/source/experimental/tor
Good day everyone!
A friend and I are looking into implementing a few of the ideas listed
in [0] as part of a university project. We saw quite a lot of discussion
on idea 1, but we're mainly interested in working on ideas 2.5 and 3,
and additionally either 2.1 or 4. Has there been discussion and work on
these yet? Where can we find it? Is there anything else that you can
tell us that we should know?
Thank you very much for your time!
Kind regards,
heddha
[0]: https://blog.torproject.org/blog/cooking-onions-names-your-onions