I spent some time trying to clean up proposal 247 based on everyone's
comments, as well as based on my own thoughts. Please have a look if you
commented on the original proposal, and complain if I've not taken your
thoughts into account.
(Aaron: In particular, I made several tradeoffs in favor of performance
and DoS resistance that may be at odds with some of your suggestions,
but I think the end result is still OK after looking into the Sybil
rates and thinking about the adversary model in more detail. You may
disagree).
I've attached my updated version of the proposal inline in this mail,
but the canonical updated proposal is in my remote at:
https://gitweb.torproject.org/user/mikeperry/torspec.git/tree/proposals/247…
Here's a summary of the changes (which are also listed in Git):
* Try to make a coherent threat model and specify its assumptions
* Fold in my comments about using disjoint sets ("buckets") for the
third level guard.
* Make the parameter discussion subsection its own section, and include
tables with far more detail for the Sybil success rates.
* Put the rotation period in a separate subsection from the number of
guards
* Switch to using min(X,X) and max(X,X) for the distribution for the
second and third layer guard lifespans, respectively. Add a
subsection describing this distribution (3.2.3)
* Changed the default parameters based on these tables, and based on my own
intuition about Tor's performance properties.
* Move the load balancing, torrc, and other performance considerations to
their own section (Section 5).
* Move "3.2. Distinguishing new HS circuits from normal HS circuits" to
section 4.1.
* Fold in some of "3.3. Circuit nodes can now be linked to specific hidden
services" into 4.1. Some of it I just removed, though, because I did not
find it credible.
* Added Roger's concerns about guard linkability to Section 4.2.
* Added a denial of service subsection to Section 4.3.
================================
Filename: 247-hs-guard-discovery.txt
Title: Defending Against Guard Discovery Attacks using Vanguards
Author: George Kadianakis
Created: 2015-07-10
Status: Draft
0. Motivation
A guard discovery attack allows attackers to determine the guard
node of a Tor client. The hidden service rendezvous protocol
provides an attack vector for a guard discovery attack, since anyone
can force an HS to construct a 3-hop circuit to a relay (#9001).
Following the guard discovery attack with a compromise and/or
coercion of the guard node can lead to the deanonymization of a
hidden service.
1. Overview
This document tries to make the above guard discovery + compromise
attack harder to launch. It introduces an optional configuration
option which makes the hidden service also pin the second and third
hops of its circuits for a longer duration.
With this new path selection, we force the adversary to perform a
Sybil attack and two compromise attacks before succeeding. This is
an improvement over the current state where the Sybil attack is
trivial to pull off, and only a single compromise attack is required.
With this new path selection, an attacker is forced to perform one or
more node compromise attacks before learning the guard node of a hidden
service. This increases the attacker's uncertainty: since compromise
attacks are costly and potentially detectable, an attacker will have to
think twice before beginning a chain of node compromise attacks that
they might not be able to complete.
1.1. Visuals
Here is what a hidden service rendezvous circuit currently looks like:
                      -> middle_1 -> middle_A
                      -> middle_2 -> middle_B
                      -> middle_3 -> middle_C
                      -> middle_4 -> middle_D
         HS -> guard  -> middle_5 -> middle_E -> Rendezvous Point
                      -> middle_6 -> middle_F
                      -> middle_7 -> middle_G
                      -> middle_8 -> middle_H
                      -> ...       -> ...
                      -> middle_n -> middle_n
This proposal pins the two middle nodes to a much more restricted
set, as follows:
                                     -> guard_3A_A
                      -> guard_2_A   -> guard_3A_B
                                     -> guard_3A_C -> Rendezvous Point
         HS -> guard_1
                                     -> guard_3B_D
                      -> guard_2_B   -> guard_3B_E
                                     -> guard_3B_F -> Rendezvous Point
Note that the third level guards are partitioned into buckets such that
they are only used with one specific second-level guard. In this way,
we ensure that even if an adversary is able to execute a Sybil attack
against the third layer, they only get to learn one of the second-layer
Guards, and not all of them. This prevents the adversary from gaining
the ability to take their pick of the weakest of the second-level
guards for further attack.
2. Design
This feature requires the HiddenServiceGuardDiscovery torrc option
to be enabled.
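For illustration, a service operator's torrc might then look like this
(a sketch only: HiddenServiceDir and HiddenServicePort are existing
options, while HiddenServiceGuardDiscovery exists only in this
proposal, and its exact syntax and placement are assumptions):

```
HiddenServiceDir /var/lib/tor/my_service
HiddenServicePort 80 127.0.0.1:8080
HiddenServiceGuardDiscovery 1
```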
When a hidden service picks its guard nodes, it also picks two
additional sets of middle nodes `second_guard_set` and
`third_guard_set` of size NUM_SECOND_GUARDS and NUM_THIRD_GUARDS
respectively for each hidden service. These sets are unique to each
hidden service created by a single Tor client, and must be kept separate
and distinct.
When a hidden service needs to establish a circuit to an HSDir,
introduction point or a rendezvous point, it uses nodes from
`second_guard_set` as the second hop of the circuit and nodes from
that second hop's corresponding `third_guard_set` as third hops of
the circuit.
A hidden service rotates nodes from the 'second_guard_set' at a random
time between MIN_SECOND_GUARD_LIFETIME hours and
MAX_SECOND_GUARD_LIFETIME hours.
A hidden service rotates nodes from the 'third_guard_set' at a random
time between MIN_THIRD_GUARD_LIFETIME and MAX_THIRD_GUARD_LIFETIME
hours.
These extra guard nodes should be picked with the same path selection
procedure that is used for regular middle nodes (though see Section
5.1 for performance reasons to restrict this slightly).
Each node's rotation time is tracked independently, to avoid disclosing
the rotation times of the primary and second-level guards.
XXX how should proposal 241 ("Resisting guard-turnover attacks") be
applied here?
2.1. Security parameters
We set NUM_SECOND_GUARDS to 4 nodes and NUM_THIRD_GUARDS to 16 nodes
(i.e., four sets of four).
XXX: 3 and 12 might be another option here, in which case our rotation
period for the second guard position can be reduced to 15 days.
We set MIN_SECOND_GUARD_LIFETIME to 1 day, and
MAX_SECOND_GUARD_LIFETIME to 33 days, for an average rotation rate of
~11 days, using the min(X,X) distribution specified in Section 3.2.3.

We set MIN_THIRD_GUARD_LIFETIME to 1 hour, and MAX_THIRD_GUARD_LIFETIME
to 18 hours, for an average rotation rate of ~12 hours, using the
max(X,X) distribution specified in Section 3.2.3.
XXX make all the above consensus parameters? Yes. Very yes, especially
if we decide to change the primary guard lifespan.
See Section 3 for more analysis on these constants.
3. Rationale and Security Parameter Selection
3.1. Threat model, Assumptions, and Goals
Consider an adversary with the following powers:
- Can launch a Sybil guard discovery attack against any node of a
rendezvous circuit. The slower the rotation period of the node,
the longer the attack takes. Similarly, the higher the percentage
of the network is compromised, the faster the attack runs.
- Can compromise any node on the network, but this compromise takes
time and potentially even coercive action, and also carries risk
of discovery.
We also make the following assumptions about the types of attacks:
1. A Sybil attack is noisy. It will require either large amounts of traffic,
multiple test circuits, or both.
2. A Sybil attack against the second or first layer Guards will be
   noisier than a Sybil attack against the third layer guard, since a
   second or first layer Sybil attack requires a timing side channel to
   determine success, whereas Sybil success at the third layer is
   almost immediately obvious, because the malicious node will now be
   used to reach the rend point for circuits of the hidden service in
   question.
3. As soon as the adversary is confident they have won the Sybil attack,
an even more aggressive circuit building attack will allow them to
determine the next node very fast (an hour or less).
4. The adversary is strongly disincentivized from compromising nodes that
may prove useless, as node compromise is even more risky for the
adversary than a Sybil attack in terms of being noticed.
Given this threat model, our security parameters were selected so that
the first two layers of guards should be hard to attack using a Sybil
guard discovery attack and hence require a node compromise attack. Ideally,
we want the node compromise attacks to carry a non-negligible probability of
being useless to the adversary by the time they complete.
On the other hand, the outermost layer of guards should rotate fast enough to
_require_ a Sybil attack.
3.2. Parameter Tuning
3.2.1. Sybil rotation counts for a given number of Guards
The probability of Sybil success for Guard discovery can be modeled as
the probability of choosing 1 or more malicious middle nodes for a
sensitive circuit over some period of time.
  P(At least 1 bad middle) = 1 - P(All good middles)
                           = 1 - P(One good middle)^(num_middles)
                           = 1 - (1 - c/n)^(num_middles)

  (c/n is the adversary's compromised fraction of the network)
In the case of Vanguards, num_middles is the number of Guards you rotate
through in a given time period. This is a function of the number of vanguards
in that position (v), as well as the number of rotations (r).
P(At least one bad middle) = 1 - (1 - c/n)^(v*r)
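As a quick sanity check (my own sketch, not part of the proposal), the
formula can be evaluated directly; for the 1% adversary with four
guards, 115 rotations is the point where P first crosses 99%:

```python
def sybil_success(c, v, r):
    # P(at least one bad middle) = 1 - (1 - c/n)^(v*r),
    # with c standing in for the compromised fraction c/n.
    return 1 - (1 - c) ** (v * r)

# 1% adversary, 4 guards: 115 rotations just crosses 99% success.
print(sybil_success(0.01, 4, 115))  # ~0.9902
print(sybil_success(0.01, 4, 114))  # ~0.9898
```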
Here are detailed tables of the number of rotations required for a
given Sybil success rate, for various numbers of guards.
1.0% Network Compromise:

Sybil Success    One    Two  Three   Four   Five    Six  Eight   Nine    Ten Twelve Sixteen
          10%     11      6      4      3      3      2      2      2      2      1      1
          15%     17      9      6      5      4      3      3      2      2      2      2
          25%     29     15     10      8      6      5      4      4      3      3      2
          50%     69     35     23     18     14     12      9      8      7      6      5
          60%     92     46     31     23     19     16     12     11     10      8      6
          75%    138     69     46     35     28     23     18     16     14     12      9
          85%    189     95     63     48     38     32     24     21     19     16     12
          90%    230    115     77     58     46     39     29     26     23     20     15
          95%    299    150    100     75     60     50     38     34     30     25     19
          99%    459    230    153    115     92     77     58     51     46     39     29
5.0% Network Compromise:

Sybil Success    One    Two  Three   Four   Five    Six  Eight   Nine    Ten Twelve Sixteen
          10%      3      2      1      1      1      1      1      1      1      1      1
          15%      4      2      2      1      1      1      1      1      1      1      1
          25%      6      3      2      2      2      1      1      1      1      1      1
          50%     14      7      5      4      3      3      2      2      2      2      1
          60%     18      9      6      5      4      3      3      2      2      2      2
          75%     28     14     10      7      6      5      4      4      3      3      2
          85%     37     19     13     10      8      7      5      5      4      4      3
          90%     45     23     15     12      9      8      6      5      5      4      3
          95%     59     30     20     15     12     10      8      7      6      5      4
          99%     90     45     30     23     18     15     12     10      9      8      6
10.0% Network Compromise:

Sybil Success    One    Two  Three   Four   Five    Six  Eight   Nine    Ten Twelve Sixteen
          10%      2      1      1      1      1      1      1      1      1      1      1
          15%      2      1      1      1      1      1      1      1      1      1      1
          25%      3      2      1      1      1      1      1      1      1      1      1
          50%      7      4      3      2      2      2      1      1      1      1      1
          60%      9      5      3      3      2      2      2      1      1      1      1
          75%     14      7      5      4      3      3      2      2      2      2      1
          85%     19     10      7      5      4      4      3      3      2      2      2
          90%     22     11      8      6      5      4      3      3      3      2      2
          95%     29     15     10      8      6      5      4      4      3      3      2
          99%     44     22     15     11      9      8      6      5      5      4      3
The rotation counts in these tables were generated with:

  import math

  def count_rotations(c, v, success):
      # Smallest number of rotations r such that
      # 1 - (1-c)^(v*r) >= success.
      r = 0
      while 1 - math.pow(1 - c, v * r) < success:
          r += 1
      return r
3.2.2. Rotation Period
As specified in Section 3.1, the primary driving force for the third
layer selection was to ensure that these nodes rotate fast enough that
it is not worth trying to compromise them, because it is unlikely for
compromise to succeed and yield useful information before the nodes stop
being used. For this reason we chose 1 to 18 hours, with a weighted
distribution (Section 3.2.3) causing the expected average to be 12 hours.
From the table in Section 3.2.1, it can be seen that this means the
Sybil attack will complete with near-certainty (99%) in 29*12 hours
(14.5 days) for the 1% adversary, 3 days for the 5% adversary, and
1.5 days for the 10% adversary.
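That arithmetic can be sketched as follows (my own illustration,
combining the rotation counts from Section 3.2.1 with the ~12 hour
average rotation period):

```python
import math

def sybil_days(c, v, success, avg_rotation_hours):
    # Smallest number of rotations reaching the target success
    # probability (as in Section 3.2.1), converted to wall-clock days.
    r = 0
    while 1 - math.pow(1 - c, v * r) < success:
        r += 1
    return r * avg_rotation_hours / 24.0

for c in (0.01, 0.05, 0.10):
    print(c, sybil_days(c, 16, 0.99, 12))
# -> 14.5 days (1%), 3.0 days (5%), 1.5 days (10%)
```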
Since rotation of each node happens independently, the distribution of
when the adversary expects to win this Sybil attack in order to discover
the next node up is uniform. This means that on average, the adversary
should expect that half of the rotation period of the next node is already
over by the time that they win the Sybil.
With this fact in mind, we choose the range and distribution of the
second layer rotation to be short enough to cause the adversary to risk
compromising nodes that turn out to be useless, yet long enough that a
Sybil attack remains noticeable in terms of client activity. For this
reason, we choose a minimum second-layer guard lifetime of 1 day,
since this gives the adversary a minimum expected window of 12 hours
during which they can compromise a guard before it might be rotated.
If the total expected rotation rate is 11 days, then the adversary can
expect overall to have 5.5 days remaining after completing their Sybil
attack before a second-layer guard rotates away.
3.2.3. Rotation distribution
In order to skew the distribution of the third layer guard lifetime
towards higher values, we use max(X,X) for the distribution, where X
is a random variable that takes on values from the uniform
distribution.

To skew the distribution of the second layer guard lifetime towards
low values (increasing the adversary's risk of compromising useless
nodes), we use min(X,X).
Here's a table of expectations (arithmetic means) for relevant
ranges of X (sampled from the integers 0..N-1).

The current choice for second-layer guards is noted with **, and
the current choice for third-layer guards is noted with ***.
  Range   Min(X,X)   Max(X,X)
     10       2.85       6.15
     11       3.18       6.82
     12       3.51       7.49
     13       3.85       8.15
     14       4.18       8.82
     15       4.51       9.49
     16       4.84      10.16
     17       5.18      10.82***
     18       5.51      11.49
     19       5.84      12.16
     20       6.18      12.82
     21       6.51      13.49
     22       6.84      14.16
     23       7.17      14.83
     24       7.51      15.49
     25       7.84      16.16
     26       8.17      16.83
     27       8.51      17.49
     28       8.84      18.16
     29       9.17      18.83
     30       9.51      19.49
     31       9.84      20.16
     32      10.17**    20.83
     33      10.51      21.49
     34      10.84      22.16
     35      11.17      22.83
     36      11.50      23.50
     37      11.84      24.16
     38      12.17      24.83
     39      12.50      25.50
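The table above can be reproduced by brute force over the two
independent draws (a quick sketch of mine; it assumes X ranges over
the integers 0..Range-1, which matches the tabulated values):

```python
def exp_min_max(r):
    # E[min(X,X)] and E[max(X,X)] for two independent draws of X,
    # each uniform over the integers 0..r-1.
    n = float(r * r)
    e_min = sum(min(a, b) for a in range(r) for b in range(r)) / n
    e_max = sum(max(a, b) for a in range(r) for b in range(r)) / n
    return e_min, e_max

print(exp_min_max(32))  # ~(10.17, 20.83); min is the 2nd-layer pick
print(exp_min_max(17))  # ~(5.18, 10.82); max is the 3rd-layer pick
```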
4. Security concerns and mitigations
4.1. Mitigating fingerprinting of new HS circuits
By pinning the middle nodes of rendezvous circuits, we make it
easier for all hops of the circuit to detect that they are part of a
special hidden service circuit with varying degrees of certainty.
The Guard node is able to recognize a Vanguard client with a high
degree of certainty because it will observe a client IP creating the
overwhelming majority of its circuits to just a few middle nodes in
any given 10-18 day time period.
The middle nodes will be able to tell with a variable certainty that
depends both on their traffic volume and on the popularity of the
service, because they will see a large number of circuits that tend to
pick the same Guard and Exit.
The final nodes will be able to tell with a similar level of certainty
that depends on their capacity and the service's popularity, because
they will see a lot of rend handshakes that all tend to have the same
second hop.
The most serious of these is the Guard fingerprinting issue. When
proposal xxx-padding-negotiation is implemented, services that enable
this feature should use those padding primitives to create fake circuits
to random middle nodes that are not their guards, in an attempt to look
more like a client.
Additionally, if Tor Browser implements "virtual circuits" based on
SOCKS username+password isolation in order to enforce the re-use of
paths when SOCKS username+passwords are re-used, then the number of
middle nodes in use during a typical user's browsing session will be
proportional to the number of sites they are viewing at any one time.
This is likely to be much lower than one new middle node every ten
minutes, and for some users, may be close to the number of Vanguards
we're considering.
This same reasoning is also an argument for increasing the number of
second-level guards beyond just two, as it will spread the hidden
service's traffic over a wider set of middle nodes, making it both
easier to cover and closer to the behavior of a client using SOCKS
virtual circuit isolation.
4.2. Hidden service linkability
Multiple hidden services on the same Tor instance should use separate
second and third level guard sets, otherwise an adversary is trivially
able to determine that the two hidden services are co-located by
inspecting their current chosen rend point nodes.
Unfortunately, if the adversary is still able to determine that two or
more hidden services are run on the same Tor instance through some other
means, then they are able to take advantage of this fact to execute a
Sybil attack more effectively, since there will now be an extra set of
guard nodes for each hidden service in use.
For this reason, if Vanguards are enabled, and more than one hidden
service is configured, the user should be advised to ensure that they do
not accidentally leak that the two hidden services are from the same Tor
instance.
4.3. Denial of service
Since it will be fairly trivial for the adversary to enumerate the
current set of rend nodes for a hidden service, denial of service
becomes a serious risk for Vanguard users.
For this reason, it is important to support a large number of
third-level guards, to increase the amount of resources required to
bring a hidden service offline by DoSing just a few Tor nodes.
5. Performance considerations
The switch to a restricted set of nodes will very likely cause
significant performance issues, especially for high-traffic hidden
services. If any of the nodes they select happen to be temporarily
overloaded, performance will suffer dramatically until the next
rotation period.
5.1. Load Balancing
Since the second and third level "guards" are chosen from the set of all
nodes eligible for use in the "middle" hop (as per hidden services
today), this proposal should not significantly affect the long-term load
on various classes of the Tor network, and should not require any
changes to either the node weight equations, or the bandwidth
authorities.
Unfortunately, transient load is another matter, as mentioned
previously. It is very likely that this scheme will increase instances
of transient overload at nodes selected by high-traffic hidden services.
One option to reduce the impact of this transient overload is to
restrict the set of middle nodes that we choose from to some percentage
of the fastest middle-capable relays in the network. This may have
some impact on load balancing, but since the total volume of hidden
service traffic is low, it may be unlikely to matter.
5.2. Circuit build timeout
The adaptive circuit build timeout mechanism in Tor is what corrects
for instances of transient node overload right now.
The timeout will naturally tend to select the current fastest and
least-loaded paths, even through this set of restricted routes, but it
may fail to behave correctly if each guard set contains only a very
small number of nodes, since the mechanism is based on assumptions
about the current path selection algorithm. It may need to be tuned
specifically for Vanguards, especially if the set of possible routes
is small.
5.3. OnionBalance
At first glance, it seems that this scheme makes multi-homed hidden
services such as OnionBalance[1] even more important for high-traffic
hidden services.
Unfortunately, if it is equally damaging to the user for any of their
multi-homed hidden service locations to be discovered, then OnionBalance
is strictly equivalent to simply increasing the number of second-level
guard nodes in use, because an active adversary can perform simultaneous
Sybil attacks against all of the rend points offered by the multi-homed
OnionBalance introduction points.
5.4. Default vs optional behavior
We suggest that this torrc option be optional because it changes path
selection in a way that may seriously impact hidden service
performance, especially for high traffic services that happen to pick
slow guard nodes.

However, having this setting disabled by default makes the hidden
services that use it stand out a lot. For this reason, we should in
fact enable this feature globally, but only after we verify its
viability for high-traffic hidden services, and ensure that it is free
of second-order load balancing effects.
Even after that point, until Single Onion Services are implemented,
there will likely still be classes of very high traffic hidden services
for whom some degree of location anonymity is desired, but for which
performance is much more important than the benefit of Vanguards, so there
should always remain a way to turn this option off.
6. Future directions
Here are some more ideas for improvements that should be done sooner
or later:
- Maybe we should make the size and rotation period of the
  secondary/third guard sets configurable by the user.
- To make it harder for an adversary, a hidden service MAY extend
the path length of its circuits by an additional static hop. This
forces the adversary to use another coercion attack to walk the
chain up to the hidden service.
7. Acknowledgments
Thanks to Aaron Johnson, John Brooks, Mike Perry and everyone else
who helped with this idea.
1. https://onionbalance.readthedocs.org/en/latest/design.html#overview
--
Mike Perry
Hi Karsten,
I was surprised that ornetradar did not send a single email for
yesterday's new relays.
After looking into it, it turned out to be an onionoo problem.
"relays_published":"2016-05-13 12:00:00"
https://onionoo.torproject.org/details?limit=4
regards,
nusenu
Hello.
How it's currently done - The Tor Browser Bundle is dynamically linked
against glibc.

Security problem - The Tor Browser Bundle risks leaking information
about the host system's library ecosystem onto the network.

Portability problem - The Tor Browser Bundle can't be run on systems
that don't use glibc, making it unusable there due to different
syscalls.

Solution proposed - Statically link the Tor Browser Bundle against
musl libc.[1] It is a simple and fast libc implementation that was
crafted especially for static linking. This would solve both the
security and portability issues.

What is the Tor developers' opinion about this? I personally don't see
any drawbacks and would be interested in discussing this further.
Sincerely,
Daniel
[1] https://www.musl-libc.org/
On 27/04/16 22:31, grarpamp wrote:
> On 4/25/16, Tim Wilson-Brown - teor <teor2345(a)gmail.com> wrote:
>>
>>> On 22 Apr 2016, at 17:03, grarpamp <grarpamp(a)gmail.com> wrote:
>>>
>>> FYI: The onioncat folks are interested in collaborating
>>> with tor folks regarding prop224.
>>>
>>> https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.t…
>>
>> I'm interested in what kind of collaboration onioncat would like to do on
>> prop224, next-generation hidden services.
>> It would be great to work this out in the next few weeks, as we're coding
>> parts of the proposal right now.
>
> Yep :) And I know Bernhard was hoping to get in touch with Roger
> on this before long.
>
> Basically, prop224 HS being wider than 80 bits will break onioncat's
> current HS onion <---> IPv6 addressing mechanism.
>
> They're looking at various backward compatibility options, as well
> as possibly making side use of the HSDir DHT, or even integrating
> more directly with the tor client.
>
Just FYI, I recently migrated all of I2P's spec proposals to the
website, and came across a seven-year-old proposal that Bernhard wrote
about improving I2P support in GarliCat:
https://geti2p.net/spec/proposals/105-garlicat-name-translation
I don't know how well it has aged, but given that Tor is now facing the
same issues that I2P has, perhaps it can be of some use if resurrected
from the dead :)
>
>> But the tor-onions mailing list is to discuss the technical details running
>> onion services.
>
> Readers of tor-onions / newbies may have been unfamiliar with
> onioncat. It's a way to get non-TCP between TorHS onions, thus in
> the thread "Hidden datagram service".
>
> I think there are a nontrivial number of users interested in, and
> using, non-strictly-TCP transport over an IPv6 tunnel interface.
> For example, look at users of CJDNS...
>
> For which we should try to continue a way, in v2, to do that over
> anonymous overlay network Tor / I2P.
>
There is already some work on doing this in I2P:
https://github.com/majestrate/i2p-tools/tree/master/i2tun
https://github.com/majestrate/i2p-tools/tree/master/pyi2tun
I2P also natively supports non-TCP protocols if that helps (only
datagrams implemented thus far).
str4d
Hello people,
this is an experimental mail meant to address legitimate usability
concerns with the size of onion addresses after proposal 224 gets
implemented. It's meant for discussion and is far from a full-blown
proposal.
Anyway, after prop224 gets implemented, we will go from 16-character onion
addresses to 52-character onion addresses. See here for more details:
https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.t…
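The 16 vs. 52 figure is just base32 arithmetic (a quick illustration
of mine; I'm ignoring any version/checksum bytes the final format may
add):

```python
import base64, os

# Current addresses: 80-bit truncated hash -> 16 base32 characters.
old_addr = base64.b32encode(os.urandom(10)).decode().lower()
# Prop224: 256-bit ed25519 public key -> 52 base32 characters.
new_addr = base64.b32encode(os.urandom(32)).decode().lower().rstrip("=")
print(len(old_addr), len(new_addr))  # 16 52
```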
This happens because we want the onion address to be a real public key, and not
the truncated hash of a public key as it is now. We want that so that we can do
fun cryptography with that public key. Specifically, we want to do key blinding
as specified here:
https://gitweb.torproject.org/torspec.git/tree/proposals/224-rend-spec-ng.t…
As I understand it, the key blinding scheme is trying to achieve the
following properties:

a) Every HS has a permanent identity onion address
b) Clients use an ephemeral address to fetch descriptors from HSDirs
c) Knowing the ephemeral address never reveals the permanent onion
   address
d) Descriptors are encrypted and can only be read by clients that know
   the identity onion key
e) Descriptors are signed and verifiable by clients who know the
   identity onion key
f) Descriptors are also verifiable in a weaker manner by HSDirs who
   know the ephemeral address

In this email I'm going to sketch a scheme that has all of the above
properties except for (f).
The suggested scheme is basically the current HSDir protocol, but with
clients using ephemeral addresses for fetching HS descriptors. Also,
we truncate onion address hashes to something larger than 80 bits.
Here is a sketch of the scheme:
------
Hidden service Alice has a long-term public identity key: A
Hidden service Alice has a long-term private identity key: a
The onion address of Alice, as in the current scheme, is a truncated H(A).
So let's say: onion_address = H(A) truncated to 128 bits.
The full public key A is contained in Alice's descriptor as it's currently the case.
When Alice wants to publish a descriptor she computes an ephemeral address
based on the current time period 't': ephemeral_address = H(t || onion_address)
Legitimate clients who want to fetch the descriptor also do the same, since
they know both 't' and 'onion_address'.
Descriptors are encrypted using a key derived from the onion_address. Hence,
only clients that know the onion_address can decrypt it.
Descriptors are signed using the long-term private key of the hidden service,
and can be verified by clients who manage to decrypt the descriptor.
---
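As a toy sketch of the above (all concrete choices here, i.e. SHA-256,
an 8-byte big-endian time period, 128-bit truncation, and base32
output, are my own illustrative assumptions, not part of the scheme):

```python
import base64, hashlib, struct

def ephemeral_address(onion_address, time_period):
    # ephemeral_address = H(t || onion_address), truncated and
    # base32-encoded for readability.
    digest = hashlib.sha256(
        struct.pack(">Q", time_period) + onion_address).digest()
    return base64.b32encode(digest[:16]).decode().lower().rstrip("=")

# Both the service and its clients can derive the same value,
# since both know the onion address and the current period t.
onion = b"\x42" * 16  # placeholder for the truncated H(A)
print(ephemeral_address(onion, 17000))
```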
Assuming the above is correct and makes sense (needs more brain), it
should maintain all of the security properties above except for the
last one.

So basically in this scheme, HSDirs won't be able to verify the
signatures of received descriptors.
The obvious question here is, is this a problem?
IIUC, having the HSDirs verify those signatures does not offer any additional
security, except from making sure that the descriptor signature was actually
created using a legitimate ed25519 key. Other than that, I don't see it
offering much.
So, what does this additional HSDir verification offer? It seems like a weak
way to ensure that no garbage is uploaded on the HSDir hash ring. However, any
reasonable attacker will put their garbage in a descriptor and sign it with a
random ed25519 key, and it will trivially pass the HSDir validation.
So do we actually care about this property enough to introduce huge
onion addresses into the system?

Please discuss and poke holes in the above system.
Cheers!
I was inspired by onioncat to write a twisted python implementation. Onionvpn doesn't have as many features as onioncat. I've successfully tested that onionvpn and onioncat can talk to each other and play nice. Both onionvpn and onioncat implement a virtual public network. Anyone can send packets to you if they know your onion address or ipv6 address... however injection attacks are unlikely since the attacker cannot know the contents of your traffic without compromising the tor process managing the onion service.
I've also tested with mosh; that is, you can use mosh which only works with ipv4 over an ipv4-to-ipv6 tunnel over onionvpn/onioncat. Like this:
mosh-client -> udp/ipv4 -> ipv6 -> tun device -> tcp-to-tor -> onion service decodes ipv6 to tun device -> ipv6 -> udp/ipv4 -> mosh-server
https://github.com/david415/onionvpn
If an onionvpn/onioncat operator were to NAT the onion ipv6 traffic to the Internet, then that host essentially becomes a special IPv6 exit node for the tor network. The same can be done for IPv4. Obviously, operating such an exit node might be risky due to the potential for abuse... however, don't you just love the idea of being able to use low-level network scanners over tor? I wonder if the Open Observatory of Network Interference would be interested in this.
david
Hi, all!
I've just turned on some code [1] that makes all of the "bug" warnings
that occur during the unit tests get logged to console. Previously,
all warnings from the unit tests were off by default.
Now there are a lot of warnings from the unit tests! For each one, we
should either:
1) Decide that the warning message wouldn't actually reflect a bug
in Tor, and change it.
2) Decide that there's a bug in the unit tests (or elsewhere) and fix it.
3) Decide that _in this case_, the unit tests are correctly testing
code that *would* indicate a real bug in Tor. In this case, we should
use the functions in log_test_helpers.c to temporarily suppress the
log messages, and make sure that this message (and only this message!)
is logged.
This is going to be a fair bit of work, but with any luck, we can
actually fix it all before the next 029 alpha comes out, and improve
our confidence in our unit tests.
[1] https://trac.torproject.org/projects/tor/ticket/20042
cheers,
--
Nick
Great to see the community making progress with post-quantum handshakes.
But I'm wondering what's going to happen with proposals #269 and #270. #269 seems to allow any post-quantum algorithm to be used in the hybrid, with NTRUEncrypt and NewHope specified as two options (presumably other options like SIDH or McEliece could also be used). #270 is more specific: a hybrid of x25519 and NewHope. NewHope seems to be in the lead, but do we want to rule out the others? A flexible proposal like #269 might be better. #269 and #270 look as if they would not be compatible with each other, so what's the process for deciding between them?
Also see https://eprint.iacr.org/2016/717.pdf, a comparison of attacks on NTRU. It doesn't break NTRU, but it does break (some versions of) YASHE, an FHE scheme based on NTRUEncrypt. In the conclusion it recommends transforming NTRU-like algorithms into ring-LWE-like algorithms, dismissing the former since they are known to be weaker. I still think a flexible protocol, rather than all eggs in the NewHope basket, is a Good Thing.
-- lukep
Hi Karsten,
some time ago I reported onionoo's poor reverse DNS results
https://trac.torproject.org/projects/tor/ticket/18342
and it didn't change since then.
As of 2016-07-02 08:00 (tpo instance)
3144 out of 8473 (~37%) still have no reverse DNS result (=IP address).
Do you have an idea whether this will improve sometime before 2016-09-01?
thanks,
nusenu
btw: Thanks for working around the maxmind AS problem with reverting to
the May version.
Hi devs,
if you're not using CollecTor's sanitized bridge descriptors for
anything, you can safely ignore this message.
As you may know, there are currently two bridge directory authorities
in the Tor network, Tonga and Bifroest, and Tonga will go away some
time this weekend.
You can fetch sanitized versions of Tonga's bridge descriptors here:
https://collector.torproject.org/recent/bridge-descriptors/
We're also now ready to provide sanitized versions of Bifroest's
bridge descriptors. The only differences from Tonga's descriptors are:
- Statuses will have Bifroest's fingerprint in their file name rather
than Tonga's. Example:
https://people.torproject.org/~karsten/volatile/20160829-164232-1D8F3A91C37…
- There will be two "concurrent" statuses published by Tonga and
Bifroest this week, containing two distinct sets of bridges.
I'm holding back Bifroest's descriptors from CollecTor for another 48
hours or so to give you time to prepare for this change. After all,
you might have written your code under the assumption that there's
just a single bridge authority, and the above change might confuse
your code and break it. Everything will be on CollecTor in 48 hours
from now.
All the best,
Karsten