Hi everyone,
This is the first status update on my work on ahmia.fi.
I spent the first week of the coding period working on a couple of things.
## Automating blacklisting of onions containing child abuse content
I implemented an automated process that runs once a week, crawls onion
addresses containing child abuse content from the uncensored wiki, and bans
them from the search results [1].
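Roughly, the weekly job boils down to something like the sketch below (the
wiki URL, the regex, and the banlist file are placeholders for illustration,
not the exact implementation):

    import re
    import requests

    WIKI_URL = "http://example.onion/blacklist"   # placeholder source page
    ONION_RE = re.compile(r"[a-z2-7]{16}\.onion")

    def update_banlist(path="banned_onions.txt"):
        # Fetch the wiki page (in practice via the Tor proxy), extract .onion
        # addresses, and append any new ones to the banlist that is used to
        # filter them out of the search results.
        page = requests.get(WIKI_URL).text
        found = set(ONION_RE.findall(page))
        with open(path, "a+") as f:
            f.seek(0)
            known = {line.strip() for line in f}
            for onion in sorted(found - known):
                f.write(onion + "\n")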
## Saving onions submitted via the Add page
I have been working on the Add page to save added onions in a database and
update the Elasticsearch servers on a daily basis [2].
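A rough sketch of what the daily job could look like, assuming the
elasticsearch-py client (the index name and document fields here are made up
for illustration):

    from elasticsearch import Elasticsearch

    def index_added_onions(onion_urls, index="onions"):
        # Push onions collected from the Add page (read from the database)
        # into the Elasticsearch servers once a day.
        es = Elasticsearch()
        for url in onion_urls:
            es.index(index=index, doc_type="onion", body={"domain": url})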
During the next two weeks I plan to complete the remaining work on the Add
page and replace Polipo with a Tor SOCKS5 proxy in ahmia-crawler.
That's it for this week,
Have a nice day.
Pushkar Pathak
[1] https://github.com/ahmia/ahmia-index/pull/3
[2] https://github.com/mdhash/ahmia-site
Hi there!
A minor bugfix release of metrics-lib/DescripTor is available:
https://dist.torproject.org/descriptor/1.8.1/
This release fixes a regression in parsing microdescriptors and
version 1 directories introduced in 1.8.0.
The problem only affects CollecTor, which builds on metrics-lib, insofar as
the named descriptors were not recognized and caused plenty of log messages;
they were still stored correctly.
Please direct comments and questions to the metrics-team mailing list [2].
Cheers,
iwakeh
[1] https://gitweb.torproject.org/metrics-lib.git/plain/CHANGELOG.md?id=e0e5d87…
[2] https://lists.torproject.org/cgi-bin/mailman/listinfo/metrics-team
I spent some time trying to clean up proposal 247 based on everyone's
comments, as well as my own thoughts. Please have a look if you
commented on the original proposal, and complain if I've not taken your
thoughts into account.
(Aaron: In particular, I made several tradeoffs in favor of performance
and DoS resistance that may be at odds with some of your suggestions,
but I think the end result is still OK after looking into the Sybil
rates and thinking about the adversary model in more detail. You may
disagree).
I've attached my updated version of the proposal inline in this mail,
but the canonical updated proposal is in my remote at:
https://gitweb.torproject.org/user/mikeperry/torspec.git/tree/proposals/247…
Here's a summary of the changes (which are also listed in Git):
* Try to make a coherent threat model and specify its assumptions
* Fold in my comments about using disjoint sets ("buckets") for the
third level guard.
* Make the parameter discussion subsection its own section, and include
tables with far more detail for the Sybil success rates.
* Put the rotation period in a separate subsection from the number of
guards
* Switch to using min(X,X) and max(X,X) for the distribution for the
second and third layer guard lifespans, respectively. Add a
subsection describing this distribution (3.2.3)
* Changed the default parameters based on these tables, and based on my own
intuition about Tor's performance properties.
* Move the load balancing, torrc, and other performance considerations to
their own section (Section 5).
* Move "3.2. Distinguishing new HS circuits from normal HS circuits" to
section 4.1.
* Fold in some of "3.3. Circuit nodes can now be linked to specific hidden
services" into 4.1. Some of it I just removed, though, because I did not
find it credible.
* Added Roger's concerns about guard linkability to Section 4.2.
* Added a denial of service subsection to Section 4.3.
================================
Filename: 247-hs-guard-discovery.txt
Title: Defending Against Guard Discovery Attacks using Vanguards
Author: George Kadianakis
Created: 2015-07-10
Status: Draft
0. Motivation
A guard discovery attack allows attackers to determine the guard
node of a Tor client. The hidden service rendezvous protocol
provides an attack vector for guard discovery, since anyone
can force an HS to construct a 3-hop circuit to a relay (#9001).
Following the guard discovery attack with a compromise and/or
coercion of the guard node can lead to the deanonymization of a
hidden service.
1. Overview
This document tries to make the above guard discovery + compromise
attack harder to launch. It introduces an optional configuration
setting which makes the hidden service also pin the second and third
hops of its circuits for a longer duration.
With this new path selection, we force the adversary to perform a
Sybil attack and two compromise attacks before succeeding. This is
an improvement over the current state where the Sybil attack is
trivial to pull off, and only a single compromise attack is required.
With this new path selection, an attacker is forced to perform one or
more node compromise attacks before learning the guard node of a hidden
service. This increases the attacker's uncertainty: compromise attacks
are costly and potentially detectable, so an attacker will have to think
twice before beginning a chain of node compromise attacks that they
might not be able to complete.
1.1. Visuals
Here is what a hidden service rendezvous circuit currently looks like:
                -> middle_1 -> middle_A
                -> middle_2 -> middle_B
                -> middle_3 -> middle_C
                -> middle_4 -> middle_D
   HS -> guard  -> middle_5 -> middle_E -> Rendezvous Point
                -> middle_6 -> middle_F
                -> middle_7 -> middle_G
                -> middle_8 -> middle_H
                -> ...      -> ...
                -> middle_n -> middle_n
This proposal pins the two middle nodes to a much more restricted
set, as follows:
                                -> guard_3A_A
                 -> guard_2_A   -> guard_3A_B
                                -> guard_3A_C -> Rendezvous Point
   HS -> guard_1
                                -> guard_3B_D
                 -> guard_2_B   -> guard_3B_E
                                -> guard_3B_F -> Rendezvous Point
Note that the third level guards are partitioned into buckets such that
they are only used with one specific second-level guard. In this way,
we ensure that even if an adversary is able to execute a Sybil attack
against the third layer, they only get to learn one of the second-layer
Guards, and not all of them. This prevents the adversary from gaining
the ability to take their pick of the weakest of the second-level
guards for further attack.
2. Design
This feature requires the HiddenServiceGuardDiscovery torrc option
to be enabled.
When a hidden service picks its guard nodes, it also picks two
additional sets of middle nodes `second_guard_set` and
`third_guard_set` of size NUM_SECOND_GUARDS and NUM_THIRD_GUARDS
respectively for each hidden service. These sets are unique to each
hidden service created by a single Tor client, and must be kept separate
and distinct.
When a hidden service needs to establish a circuit to an HSDir,
introduction point or a rendezvous point, it uses nodes from
`second_guard_set` as the second hop of the circuit and nodes from
that second hop's corresponding `third_guard_set` as third hops of
the circuit.
A hidden service rotates nodes from the 'second_guard_set' at a random
time between MIN_SECOND_GUARD_LIFETIME and MAX_SECOND_GUARD_LIFETIME.
A hidden service rotates nodes from the 'third_guard_set' at a random
time between MIN_THIRD_GUARD_LIFETIME and MAX_THIRD_GUARD_LIFETIME
hours.
These extra guard nodes should be picked with the same path selection
procedure that is used for regular middle nodes (though see Section
5.1 for performance reasons to restrict this slightly).
Each node's rotation time is tracked independently, to avoid disclosing
the rotation times of the primary and second-level guards.
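A rough, non-normative sketch of the set construction and path choice
described above (pick_middle_node() is a stand-in for Tor's normal
middle-node selection, the parameters are taken from Section 2.1, and
lifetime rotation is omitted here; see Section 3.2.3):

    import random

    NUM_SECOND_GUARDS = 4
    NUM_THIRD_GUARDS = 16   # partitioned into one bucket per second-level guard

    def build_guard_sets(pick_middle_node):
        # Pick the second-level guards, then a dedicated bucket of
        # third-level guards for each of them.
        second_guards = [pick_middle_node() for _ in range(NUM_SECOND_GUARDS)]
        bucket_size = NUM_THIRD_GUARDS // NUM_SECOND_GUARDS
        third_buckets = {g: [pick_middle_node() for _ in range(bucket_size)]
                         for g in second_guards}
        return second_guards, third_buckets

    def pick_rend_path(primary_guard, second_guards, third_buckets, rend_point):
        # The second hop comes from second_guard_set; the third hop comes only
        # from that second hop's own bucket, so a Sybil at the third layer
        # exposes at most one second-level guard.
        second = random.choice(second_guards)
        third = random.choice(third_buckets[second])
        return [primary_guard, second, third, rend_point]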
XXX how should proposal 241 ("Resisting guard-turnover attacks") be
applied here?
2.1. Security parameters
We set NUM_SECOND_GUARDS to 4 nodes and NUM_THIRD_GUARDS to 16 nodes
(i.e., four sets of four).
XXX: 3 and 12 might be another option here, in which case our rotation
period for the second guard position can be reduced to 15 days.
We set MIN_SECOND_GUARD_LIFETIME to 1 day, and
MAX_SECOND_GUARD_LIFETIME to 33 days, for an average rotation period of
~11 days, using the min(X,X) distribution specified in Section 3.2.3.
We set MIN_THIRD_GUARD_LIFETIME to 1 hour, and MAX_THIRD_GUARD_LIFETIME
to 18 hours, for an average rotation period of ~12 hours, using the max(X,X)
distribution specified in Section 3.2.3.
XXX make all the above consensus parameters? Yes. Very yes, especially
if we decide to change the primary guard lifespan.
See Section 3 for more analysis on these constants.
3. Rationale and Security Parameter Selection
3.1. Threat model, Assumptions, and Goals
Consider an adversary with the following powers:
- Can launch a Sybil guard discovery attack against any node of a
rendezvous circuit. The slower the rotation period of the node,
the longer the attack takes. Similarly, the higher the percentage
of the network is compromised, the faster the attack runs.
- Can compromise any node on the network, but this compromise takes
time and potentially even coercive action, and also carries risk
of discovery.
We also make the following assumptions about the types of attacks:
1. A Sybil attack is noisy. It will require either large amounts of traffic,
multiple test circuits, or both.
2. A Sybil attack against the second or first layer Guards will be
noisier than a Sybil attack against the third layer guard, since a
second or first layer Sybil attack requires a timing side channel in
order to determine success, whereas Sybil success is almost
immediately obvious to the third layer guard, since it will now be returned
as a rend point for circuits for the hidden service in question.
3. As soon as the adversary is confident they have won the Sybil attack,
an even more aggressive circuit building attack will allow them to
determine the next node very fast (an hour or less).
4. The adversary is strongly disincentivized from compromising nodes that
may prove useless, as node compromise is even more risky for the
adversary than a Sybil attack in terms of being noticed.
Given this threat model, our security parameters were selected so that
the first two layers of guards should be hard to attack using a Sybil
guard discovery attack and hence require a node compromise attack. Ideally,
we want the node compromise attacks to carry a non-negligible probability of
being useless to the adversary by the time they complete.
On the other hand, the outermost layer of guards should rotate fast enough to
_require_ a Sybil attack.
3.2. Parameter Tuning
3.2.1. Sybil rotation counts for a given number of Guards
The probability of Sybil success for Guard discovery can be modeled as
the probability of choosing 1 or more malicious middle nodes for a
sensitive circuit over some period of time.
P(At least 1 bad middle) = 1 - P(All Good Middles)
                         = 1 - P(One Good middle)^(num_middles)
                         = 1 - (1 - c/n)^(num_middles)
where c/n is the adversary's compromise fraction of the network.
In the case of Vanguards, num_middles is the number of Guards you rotate
through in a given time period. This is a function of the number of vanguards
in that position (v), as well as the number of rotations (r).
P(At least one bad middle) = 1 - (1 - c/n)^(v*r)
Here are detailed tables of the number of rotations required for
a given Sybil success rate, for various numbers of guards.
1.0% Network Compromise:
Sybil Success   One   Two Three  Four  Five   Six Eight  Nine   Ten Twelve Sixteen
          10%    11     6     4     3     3     2     2     2     2      1       1
          15%    17     9     6     5     4     3     3     2     2      2       2
          25%    29    15    10     8     6     5     4     4     3      3       2
          50%    69    35    23    18    14    12     9     8     7      6       5
          60%    92    46    31    23    19    16    12    11    10      8       6
          75%   138    69    46    35    28    23    18    16    14     12       9
          85%   189    95    63    48    38    32    24    21    19     16      12
          90%   230   115    77    58    46    39    29    26    23     20      15
          95%   299   150   100    75    60    50    38    34    30     25      19
          99%   459   230   153   115    92    77    58    51    46     39      29
5.0% Network Compromise:
Sybil Success   One   Two Three  Four  Five   Six Eight  Nine   Ten Twelve Sixteen
          10%     3     2     1     1     1     1     1     1     1      1       1
          15%     4     2     2     1     1     1     1     1     1      1       1
          25%     6     3     2     2     2     1     1     1     1      1       1
          50%    14     7     5     4     3     3     2     2     2      2       1
          60%    18     9     6     5     4     3     3     2     2      2       2
          75%    28    14    10     7     6     5     4     4     3      3       2
          85%    37    19    13    10     8     7     5     5     4      4       3
          90%    45    23    15    12     9     8     6     5     5      4       3
          95%    59    30    20    15    12    10     8     7     6      5       4
          99%    90    45    30    23    18    15    12    10     9      8       6
10.0% Network Compromise:
Sybil Success   One   Two Three  Four  Five   Six Eight  Nine   Ten Twelve Sixteen
          10%     2     1     1     1     1     1     1     1     1      1       1
          15%     2     1     1     1     1     1     1     1     1      1       1
          25%     3     2     1     1     1     1     1     1     1      1       1
          50%     7     4     3     2     2     2     1     1     1      1       1
          60%     9     5     3     3     2     2     2     1     1      1       1
          75%    14     7     5     4     3     3     2     2     2      2       1
          85%    19    10     7     5     4     4     3     3     2      2       2
          90%    22    11     8     6     5     4     3     3     3      2       2
          95%    29    15    10     8     6     5     4     4     3      3       2
          99%    44    22    15    11     9     8     6     5     5      4       3
The rotation counts in these tables were generated with:
    import math

    def count_rotations(c, v, success):
        # Count rotations until 1 - (1 - c)^(v*r) reaches the target success rate.
        r = 0
        while 1 - math.pow((1 - c), v * r) < success: r += 1
        return r
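For example, the "Four" column above can be reproduced directly:

    count_rotations(0.01, 4, 0.99)   # -> 115 (1% compromise, 99% success)
    count_rotations(0.05, 4, 0.99)   # ->  23 (5% compromise, 99% success)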
3.2.2. Rotation Period
As specified in Section 3.1, the primary driving force for the third
layer selection was to ensure that these nodes rotate fast enough that
it is not worth trying to compromise them, because it is unlikely for
compromise to succeed and yield useful information before the nodes stop
being used. For this reason we chose 1 to 18 hours, with a weighted
distribution (Section 3.2.3) causing the expected average to be 12 hours.
From the table in Section 3.2.1, this means that the Sybil attack will
complete with near-certainty (99%) in 29*12 hours (14.5 days) for the
1% adversary, in 3 days for the 5% adversary, and in 1.5 days for the
10% adversary.
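As a quick cross-check of these figures, using the "Sixteen" column of the
99% rows and the 12-hour average rotation:

    MEAN_ROTATION_HOURS = 12
    # 99% Sybil success against 16 third-layer guards (tables in 3.2.1):
    days = {0.01: 29 * MEAN_ROTATION_HOURS / 24.0,   # -> 14.5 days
            0.05:  6 * MEAN_ROTATION_HOURS / 24.0,   # ->  3.0 days
            0.10:  3 * MEAN_ROTATION_HOURS / 24.0}   # ->  1.5 days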
Since rotation of each node happens independently, the distribution of
when the adversary expects to win this Sybil attack in order to discover
the next node up is uniform. This means that on average, the adversary
should expect that half of the rotation period of the next node is already
over by the time that they win the Sybil.
With this fact, we choose our range and distribution for the second
layer rotation to be short enough to cause the adversary to risk
compromising nodes that are useless, yet long enough to require a
Sybil attack to be noticeable in terms of client activity. For this
reason, we choose a minimum second-layer guard lifetime of 1 day,
since this gives the adversary an expected window of at least 12 hours
during which they can compromise a guard before it might be rotated.
If the average rotation period is 11 days, then the adversary can
expect overall to have 5.5 days remaining after completing their Sybil
attack before a second-layer guard rotates away.
3.2.3. Rotation distribution
In order to skew the distribution of the third layer guard towards
higher values, we use max(X,X) for the distribution, where X is a
random variable that takes on values from the uniform distribution.
In order to skew the distribution of the second layer guard towards
low values (to increase the adversary's risk of compromising nodes that
turn out to be useless), we use min(X,X).
Here is a table of expectations (arithmetic means) for relevant
ranges of X (sampled uniformly from the integers 0..N-1).
The current choice for second-layer guards is noted with **, and
the current choice for third-layer guards is noted with ***.
Range Min(X,X) Max(X,X)
10 2.85 6.15
11 3.18 6.82
12 3.51 7.49
13 3.85 8.15
14 4.18 8.82
15 4.51 9.49
16 4.84 10.16
17 5.18 10.82***
18 5.51 11.49
19 5.84 12.16
20 6.18 12.82
21 6.51 13.49
22 6.84 14.16
23 7.17 14.83
24 7.51 15.49
25 7.84 16.16
26 8.17 16.83
27 8.51 17.49
28 8.84 18.16
29 9.17 18.83
30 9.51 19.49
31 9.84 20.16
32 10.17** 20.83
33 10.51 21.49
34 10.84 22.16
35 11.17 22.83
36 11.50 23.50
37 11.84 24.16
38 12.17 24.83
39 12.50 25.50
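A small sketch of how such skewed lifetimes can be drawn (this assumes X is
sampled uniformly from 0..N-1, which is what reproduces the means above; the
helper is illustrative, not part of the proposal):

    import random

    def skewed_sample(n, skew_low):
        # Draw two independent uniform samples from 0..n-1 and keep the min
        # (skews low, second-layer lifetimes) or the max (skews high,
        # third-layer lifetimes).
        x1, x2 = random.randrange(n), random.randrange(n)
        return min(x1, x2) if skew_low else max(x1, x2)

    # Averaged over many draws: n=32 with min(X,X) -> ~10.17 (second layer),
    # n=17 with max(X,X) -> ~10.82 (third layer), matching the table above.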
4. Security concerns and mitigations
4.1. Mitigating fingerprinting of new HS circuits
By pinning the middle nodes of rendezvous circuits, we make it
easier for all hops of the circuit to detect that they are part of a
special hidden service circuit with varying degrees of certainty.
The Guard node is able to recognize a Vanguard client with a high
degree of certainty because it will observe a client IP creating the
overwhelming majority of its circuits to just a few middle nodes in
any given 10-18 day time period.
The middle nodes will be able to tell with a variable certainty that
depends both on their traffic volume and on the popularity of the
service, because they will see a large number of circuits that tend to
pick the same Guard and Exit.
The final nodes will be able to tell with a similar level of certainty
that depends on their capacity and the service popularity, because they
will see a lot of rend handshakes that all tend to have the same second
hop.
The most serious of these is the Guard fingerprinting issue. When
proposal xxx-padding-negotiation is implemented, services that enable
this feature should use those padding primitives to create fake circuits
to random middle nodes that are not their guards, in an attempt to look
more like a client.
Additionally, if Tor Browser implements "virtual circuits" based on
SOCKS username+password isolation in order to enforce the re-use of
paths when SOCKS username+passwords are re-used, then the number of
middle nodes in use during a typical user's browsing session will be
proportional to the number of sites they are viewing at any one time.
This is likely to be much lower than one new middle node every ten
minutes, and for some users, may be close to the number of Vanguards
we're considering.
This same reasoning is also an argument for increasing the number of
second-level guards beyond just two, as it will spread the hidden
service's traffic over a wider set of middle nodes, making it both
easier to cover and closer to the behavior of a client using SOCKS
virtual circuit isolation.
4.2. Hidden service linkability
Multiple hidden services on the same Tor instance should use separate
second and third level guard sets, otherwise an adversary is trivially
able to determine that the two hidden services are co-located by
inspecting their current chosen rend point nodes.
Unfortunately, if the adversary is still able to determine that two or
more hidden services are run on the same Tor instance through some other
means, then they are able to take advantage of this fact to execute a
Sybil attack more effectively, since there will now be an extra set of
guard nodes for each hidden service in use.
For this reason, if Vanguards are enabled, and more than one hidden
service is configured, the user should be advised to ensure that they do
not accidentally leak that the two hidden services are from the same Tor
instance.
4.3. Denial of service
Since it will be fairly trivial for the adversary to enumerate the
current set of rend nodes for a hidden service, denial of service
becomes a serious risk for Vanguard users.
For this reason, it is important to support a large number of
third-level guards, to increase the amount of resources required to
bring a hidden service offline by DoSing just a few Tor nodes.
5. Performance considerations
The switch to a restricted set of nodes will very likely cause
significant performance issues, especially for high-traffic hidden
services. If any of the nodes they select happen to be temporarily
overloaded, performance will suffer dramatically until the next
rotation period.
5.1. Load Balancing
Since the second and third level "guards" are chosen from the set of all
nodes eligible for use in the "middle" hop (as per hidden services
today), this proposal should not significantly affect the long-term load
on various classes of the Tor network, and should not require any
changes to either the node weight equations, or the bandwidth
authorities.
Unfortunately, transient load is another matter, as mentioned
previously. It is very likely that this scheme will increase instances
of transient overload at nodes selected by high-traffic hidden services.
One option to reduce the impact of this transient overload is to
restrict the set of middle nodes that we choose from to some percentage
of the fastest middle-capable relays in the network. This may have
some impact on load balancing, but since the total volume of hidden
service traffic is low, it may be unlikely to matter.
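As a sketch of what such a restriction could look like (the relay objects and
the consensus_weight field are stand-ins for whatever bandwidth measure an
implementation would actually use):

    def fastest_middle_subset(relays, fraction=0.5):
        # Keep only the fastest `fraction` of middle-capable relays,
        # ranked by their (stand-in) consensus weight.
        ranked = sorted(relays, key=lambda r: r.consensus_weight, reverse=True)
        return ranked[:max(1, int(len(ranked) * fraction))]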
5.2. Circuit build timeout
The adaptive circuit build timeout mechanism in Tor is what corrects
for instances of transient node overload right now.
The timeout will naturally tend to select the current fastest and
least-loaded paths, even through this restricted set of routes. However,
it may fail to behave correctly if each guard set contains only a very
small number of nodes, since it is based upon assumptions about the
current path selection algorithm. It may therefore need to be tuned
specifically for Vanguards, especially if the set of possible routes is
small.
5.3. OnionBalance
At first glance, it seems that this scheme makes multi-homed hidden
services such as OnionBalance[1] even more important for high-traffic
hidden services.
Unfortunately, if it is equally damaging to the user for any of their
multi-homed hidden service locations to be discovered, then OnionBalance
is strictly equivalent to simply increasing the number of second-level
guard nodes in use, because an active adversary can perform simultaneous
Sybil attacks against all of the rend points offered by the multi-homed
OnionBalance introduction points.
5.4. Default vs optional behavior
We suggest that this torrc option be optional, because it changes path
selection in a way that may seriously impact hidden service performance,
especially for high traffic services that happen to pick slow guard
nodes.
However, by having this setting be disabled by default, we make hidden
services that use it stand out a lot. For this reason, we should in fact
enable this feature globally, but only after we verify its viability for
high-traffic hidden services, and ensure that it is free of second-order
load balancing effects.
Even after that point, until Single Onion Services are implemented,
there will likely still be classes of very high traffic hidden services
for which some degree of location anonymity is desired, but for which
performance is much more important than the benefit of Vanguards, so there
should always remain a way to turn this option off.
6. Future directions
Here are some more ideas for improvements that should be done sooner
or later:
- Maybe we should make the size and rotation period of the
second/third guard sets configurable by the user.
- To make it harder for an adversary, a hidden service MAY extend
the path length of its circuits by an additional static hop. This
forces the adversary to use another coercion attack to walk the
chain up to the hidden service.
7. Acknowledgments
Thanks to Aaron Johnson, John Brooks, Mike Perry and everyone else
who helped with this idea.
1. https://onionbalance.readthedocs.org/en/latest/design.html#overview
--
Mike Perry
Hi there!
The first June release of metrics-lib/DescripTor is available:
https://dist.torproject.org/descriptor/1.8.0/
The theme of this release is to prepare more efficient descriptor
processing and some of the non-backwards compatible changes to come
in version 2.0.0.
Main changes:
Raw descriptor contents are now stored with less overhead in-memory;
parsing methods were streamlined and the class structure of the
implementation is improved.
Of course, all bugs surfacing during this work were removed,
for details see [1].
Please direct comments and questions to the metrics-team mailing list [2].
Cheers,
iwakeh
[1] https://gitweb.torproject.org/metrics-lib.git/plain/CHANGELOG.md?id=086b046…
[2] https://lists.torproject.org/cgi-bin/mailman/listinfo/metrics-team
I searched for the Snowflake bridge in Atlas, and couldn't find it. Its
fingerprint is 2B280B23E1107BB62ABFC40DDCC8824814F80A72. Its torrc is
stock "Last updated 9 October 2013 for Tor 0.2.5.2-alpha" except for
these settings:
ContactInfo David Fifield <dcf(a)torproject.org>
SOCKSPort 0
ORPort 9001
BridgeRelay 1
ExtORPort auto
ServerTransportPlugin snowflake exec /usr/local/bin/snowflake-server --acme-hostnames snowflake.bamsoftware.com --acme-email dcf(a)torproject.org --log /var/log/tor/snowflake-server.log
ServerTransportListenAddr snowflake 0.0.0.0:443
Its ORPort 9001 is blocked by the local firewall, because it is meant to
be only a Snowflake bridge, and not a vanilla bridge. (Most of the
default Tor Browser obfs4 bridges are configured the same way, with
their ORPort blocked.) There are these messages in the log (which I
expected):
[warn] Your server (37.218.242.151:9001) has not managed to confirm that its ORPort is reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.
Why is the bridge not appearing in Atlas? I initially suspected
https://bugs.torproject.org/18050, whose changelog entry is:
- Check that both the ORPort and DirPort (if present) are reachable
before publishing a relay descriptor. Otherwise, relays publish a
descriptor with DirPort 0 when the DirPort reachability test takes
longer than the ORPort reachability test.
Closes bug #18050. Reported by "starlight", patch by "teor".
Bugfix on 0.1.0.1-rc, commit a1f1fa6ab on 27 Feb 2005.
But if it's the case that an unreachable ORPort causes descriptors not
to be uploaded, then why do the default obfs4 bridges appear in Atlas?
For example:
https://atlas.torproject.org/#details/D9C805C955CB124D188C0D44F271E9BE57DE2…
https://atlas.torproject.org/#details/D3D4A456FCB5F301F092F6A49ED671B84B432…
https://atlas.torproject.org/#details/11A3982C417AF37230F576006405BE5338F41…
Actually, now that I look at it, I notice some other default bridges are
not present in Atlas, for example the two from
https://bugs.torproject.org/21917, which went out in Tor Browser 6.5.2:
C8CBDB2464FC9804A69531437BCF2BE31FDD2EE4
0BAC39417268B96B9F514E7F63FA6FBA1A788955
What's going on and how can we fix it? You can find a list of default
bridge fingerprints here:
https://gitweb.torproject.org/builders/tor-browser-bundle.git/tree/Bundle-D…