Hello,
Thank you again for your valuable comments and the effort you put into improving this!
I have added the proposal here: https://github.com/frochet/wf_proposal. And, by the way, the slides from the talk might help in understanding the proposal (for anyone who hesitates to dive into it: you can browse the slides at http://ndouuqkqdsd6v56h.onion/ and get the big picture, especially from slides 14.1, 14.2, 14.3, 14.4 and 14.5).
On 15/01/18 03:26, teor wrote:
On 13 Jan 2018, at 01:17, Florentin Rochet florentin.rochet@uclouvain.be wrote:
Hello,
Thank you for your helpful review, teor.
I updated the proposal based on most of your comments (see attached .txt) and I respond inline to add some clarifications regarding a few of your questions.
Btw, I've mirrored my private repo to github https://github.com/frochet/Waterfilling, such that you have the proper commit history.
Thanks!
I replied below in context.
On 2018-01-11 14:47, teor wrote:
Hi Florentin,
I have copied your proposal below, so I can respond to it inline. ...
Hence, in addition to selecting paths in a probabilistic manner, the path selection algorithm is responsible for balancing the network, that is, making sure that enough bandwidth is made available in all positions. The current balance of the network is decided by the bandwidth-weights (see dir-spec.txt section 3.8.3 and/or the Waterfilling PETS2017 paper https://petsymposium.org/2017/papers/issue2/paper05-2017-2-source.pdf). This balance does not achieve maximum diversity in the end positions of user paths: the same network throughput could be achieved by decreasing the use of high-bandwidth relays and increasing the use of lower-bandwidth relays in the guard position, instead of using these relays in a way that is simply proportional to their bandwidth.
When you say "bandwidth", it's not clear whether you mean consensus weight (measured bandwidth) or advertised bandwidth (bandwidth capacity). They're not the same.
I'm going to assume consensus weight from now on. Please fix all uses of bandwidth in the rest of the proposal.
Yes, we mean consensus weight :) I did s/bandwidth/consensus weight within the proposal
In some cases, you seem to expect the consensus weight to be the available bandwidth (or advertised bandwidth) of the relay. I'll try to find them, but you should check as well.
Ok, I will look into it. If you see some, let me know :)
...
<skip>
> Compute a "pivot" N and the weight Wgg_i assigned to relay_i in such a way that:
>
> (a) Wgg_i * BW_i == Wgg_i+1 * BW_i+1   forall i in [1, N]

Please put subscripts in brackets, otherwise it's ambiguous:
Wgg_(i+1) * BW_(i+1)

> (b) Wgg_i == 1                         forall i in [N+1, K]
> (c) sum_{i=1}^{K} Wgg_i*BW_i = Wgg*G   (Wgg is provided by Tor)
These equations are under-specified, because they allow solutions with:
  Wgg*G > 0
  Wgg_1 == 0
That is, no guard selections for high-bandwidth relays.
This is a really important point:
Your equations allow the largest guard to have zero guard weighting and zero guard bandwidth. They should require it to have at least as much guard bandwidth as the water level.
From the descriptions, I think the missing condition is: Wgg_N * BW_N >= Wgg_(N+1) * BW_(N+1)
Yes, this can be added. But I think that this condition is redundant, since BW_i are sorted in decreasing order.
No, it's not, because it's a condition on the *guard-weighted* consensus weights. The sorting is a condition on the *original* consensus weights.
I have added the condition, thanks :)
It might help to provide a procedural algorithm for assigning bandwidth, as well as equations. It would resolve some ambiguity.
I'm not sure I understand what 'bandwidth' means here. I can provide a procedural algorithm to compute the waterfilling bandwidth-weights, if that is what you meant?
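For concreteness, a minimal sketch of such a procedural computation could look like the following (Python, hypothetical names; a sketch only, not the actual implementation). It follows equations (a)-(c) plus the condition added above, works in floating point for readability, and leaves the integer remainder question to the discussion below:

    def waterfilling_guard_weights(bw, Wgg):
        """Compute per-relay guard weights Wgg_i for guards with consensus
        weights bw, sorted in decreasing order, following equations (a)-(c):
        guards above the pivot are capped at a common water level, guards
        below it keep weight 1, and the total guard bandwidth equals Wgg*G."""
        K = len(bw)
        G = sum(bw)
        target = Wgg * G                  # total bandwidth wanted in the guard position

        tail = G                          # sum of bw[N:] for the current candidate pivot
        for N in range(1, K + 1):
            tail -= bw[N - 1]             # bandwidth of guards below the candidate pivot
            level = (target - tail) / N   # water level implied by this pivot
            below = bw[N] if N < K else 0
            # A pivot is valid when the level sits between the pivot guard and
            # the next one; the lower bound is the added condition
            # Wgg_N * BW_N >= Wgg_(N+1) * BW_(N+1).
            if below <= level <= bw[N - 1]:
                return [level / bw[i] for i in range(N)] + [1.0] * (K - N), level
        # Degenerate case (tiny networks): no valid pivot was found.
        raise ValueError("network too small for waterfilling")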
Also, Wgg is provided by the Tor directory authorities based on consensus weights from the bandwidth authorities.
And what happens to any remainder in the calculations?
This is a very good question. Currently, my implementation ignores the remainder. This is negligible for a large network but can be problematic for a small one (of a few relays).
A possible solution would be to use floating-point precision for consensus weights.
No, this is not a good solution. It simply passes the issue to the floating-point implementation. And the result will depend on the libraries, processor, and compiler.
For this reason, we work very hard to use as little floating point as possible in Tor. We really don't want to use it when generating the consensus, because mismatches will break consensus.
Instead, I suggest you apply this missing condition:
From the descriptions, I think the missing condition is: Wgg_N * BW_N >= Wgg_(N+1) * BW_(N+1)
And then allocate the remainder to the highest-bandwidth relays, perhaps in proportion to their consensus weight. Please allocate bandwidth to equally weighted relays using a stable sort that is the same on all directory authorities. (The existing relay sorting code might already do this for you.)
I've added the condition and also added an implementation notes section with your comments. Thanks for pointing this out!
(This is most important for small, low bandwidth test networks.)
For example, if:
  G = 10
  Wgg = 0.6
  BW_i = 6, 2, 2
What are the final weighted bandwidths? 2, 2, 2?
As I noted, my current implementation would crash if the water level reaches the smallest relay. Since it was prototype code, I didn't bother thinking about it and left it that way.
We would need a solution for this crash as part of the proposal.
Added in implementation note.
I think that below a certain network size, it does not make sense to use this proposal. In this example, the remainder accounts for a large part of the network capacity and would just be wasted.
We will test this in small networks, and it must work. I suggest you allocate the remainder to high-bandwidth relays.
Fine with me :) Added in implementation note.
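As a rough illustration of one possible integer-arithmetic variant of that remainder handling (a sketch only, with hypothetical names; not necessarily what the proposal will adopt): compute an integer water level for the guards above the pivot, then hand the leftover units to the largest guards, in stable sorted order.

    def allocate_with_remainder(bw, N, target):
        """Integer variant for the guards above the pivot: give each of the
        first N guards (bw is sorted in decreasing order) the integer water
        level, then spread the remainder over the largest guards, one extra
        unit each."""
        tail = sum(bw[N:])                 # guards below the pivot keep weight 1
        level, remainder = divmod(target - tail, N)
        alloc = [level] * N
        for i in range(remainder):         # leftover units go to the largest guards
            alloc[i] += 1
        return alloc                       # guard-weighted bandwidth for bw[:N]

    # e.g. guards with consensus weights [9, 8, 2] and a guard target of 13:
    # the pivot is N = 2, and allocate_with_remainder([9, 8, 2], 2, 13) == [6, 5],
    # i.e. the single leftover unit goes to the largest guard.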
...
<skip>
> 2.1 Going further by tweaking original bandwidth-weights computation
>
> As explained above, our Waterfilling equations are based on: 1) the
> Wgg weight computed by Tor 2) the assumption that the bandwidth
> available in exit is scarce, i.e., it is lower than the one
> available for guard (and middle) positions.
>
> The second point is satisfied most of the time in Tor, and we do take
> it for granted here.
>
> We, however, observe that Waterfilling could be made even more
> effective by applying a minor change in the way Tor computes the
> Wgg. For the moment, Tor computes Wgg in such a way that the same
> bandwidth is allocated to the guard and middle positions. As a
> result, both positions are in excess compared to the exit position.
>
> The water level could be decreased and, as a result, the uniformity
> of the guard selection process could be improved, by computing Wgg
> in a way that allocates the same bandwidth to the guard and exit
> positions, putting the middle position as the only position in
> excess.

No, this would slow down Guards, and everything that isn't an exit circuit:
- directory fetches (3% of total bandwidth to guard position)
If we reduce guard weights on large relays, does this slow down Tor bootstrap? (Because smaller relays with higher latency get more microdescriptor requests.)
Does it slow down directory fetches in general?
Assuming the smallest guards have higher latency on average, this could indeed slow down directory fetches and, in turn, maybe slow down Tor bootstrapping. I guess measuring this with Shadow is possible, as is collecting statistics on a small set of deployed Tor clients.
- onion services (rend is 4% of total bandwidth to guard and middle)
- HSDir is unweighted, and we don't know how much bandwidth it uses
- FallbackDir is unweighted, but mostly Guards, and small
That's difficult to predict; I cannot be sure whether it is better or worse for that type of traffic, since internal circuits use at least 2 middle relays in addition to the guard, and sometimes not even the guard. Hence we might also think that pushing a bit more bandwidth to the middle position could be a good thing. Moreover, middle relays are unstable and often at the edge of the internet, while guards are stable and mostly within the core of the internet. Hence, a little more of them in the middle position *could* be a good thing, especially if it makes the entry selection probability more uniform. Anyway, I don't have any proof to assert this, just as I don't have any proof to assert that this optimization could be bad. What I do know is that, for exit circuits, it does not slow down anything.
This optimization is not mandatory, and could also be enabled/disabled at will by the directory auths.
We show in the performance section of the Waterfilling paper that scarcity on two positions does not reduce performance compared to vanilla bandwidth-weights.
What about guards that have low consensus weight due to latency, rather than available bandwidth?
I think this could also cause you huge latency issues as you push more bandwidth away from fast relays. I'm not sure if shadow captures this accurately.
If any bandwidth is pushed away from fast relays in the entry position and makes the entry position slower on average, then it will make the middle position faster (because it got the bandwidth that was pushed away). Since the latency of your traffic flow only depends on the overall latency of the circuit, this will not appear slower or faster, on average. This is exactly what we observe in Shadow, and yes, it captures latency accurately. At least, better than any other simulator.
I am also concerned that this proposal may overload small guards and underload large guards, because it does not consider advertised bandwidth.
I've added this concern to the 'unanswered questions' section. This proposal assumes relay measurements are reliable (consensus weight). If this is not the case, then strange things might happen. In our Shadow results, we have seen that the tails of the CDFs for measurements were a little bit "longer", meaning that the few worst circuits (3%) that we can obtain with the current path selection were a little bit worse with Waterfilling on a loaded network (no change on a light network). And this fraction of worse circuits was even reduced (to less than 1%) when we applied Section 2.1 of the proposal (moving more bandwidth to the middle position).
How does the current path selection consider advertised bandwidth? (Apart from the bwauths when measuring relays?)
For example, some guards are weighted at 10x their advertised bandwidth. Others are weighted at 0.1x their advertised bandwidth.
There is also a geographical bias to guard bandwidth: https://atlas.torproject.org/#map_consensus_weight_to_bandwidth/flag:Guard
Yep, thanks for the link. Atlas is getting nicer :) For what it's worth, a possible explanation for this bias would be how big ISPs route internal traffic within their own AS: most internal links are good 10 Gbit optic fiber, while connections to the outside world mostly go through smaller links. Bwauths can be tricked into believing that some relays have larger capacity because they have built circuits with relays internally connected over strong 10 Gbit links. I don't know how the bwauths account for such a thing? But that is another topic of discussion.
Does shadow capture the consensus weight to advertised bandwidth ratio? (That is, the actual load on the relay.) Or the advertised bandwidth? (The approximate actual capacity of the relay.)
Well, Shadow uses both. See the doc of the function https://github.com/shadow/shadow-plugin-tor/blob/master/tools/generate.py#L8... used to generate the topology and set up link speeds from the relay's advertised bandwidth. Then, for path selection, Shadow uses the measured bandwidth from the torflow plugins or from a static v3bw file.
I think this proposal will actually be good for the network, because some large consensus weight relays are overloaded. And giving them more middle weight will reduce their overall load. (Middles get less load than guards.)
But I would like others to think through this, too.
Cool :) And regarding the main point of this proposal: reducing the anonymity threat from the top guards and making path compromise harder? Do you think the method presented here is valuable? And how does this weigh against the performance concerns?
3 Security implications
An analysis of the security impact of the Waterfilling proposal is made in Section 6 of the Waterfilling paper. It studies the expectation of the number of relays that an adversary needs to control in order to mount a successful correlation attack at a given time, as well as an analysis of the evolution of the time until first path compromise, based on TorPS.
Given that the distribution produced by Waterfilling is closer to the uniform distribution, the use of Waterfilling increases the expected number of relays that an adversary needs to compromise in order to mount a successful correlation attack. Concretely, and based on real data from 2015, this expectation increases by approximately 150 relays.
What percentage is this?
~ 25 %
<skip>
I'm sorry, I can't read this, there are build files everywhere. I can't even find which files were changed. Can you provide clean commits in a fork, based on the Tor master branch?
Also, I think you will need a new consensus method for this change. We don't use consensus parameters or torrc options to modify the content of the consensus, we use consensus methods.
Sorry about this. I've updated the repo with a proper commit history based on my fork. The code gives an idea of how easy it is to plug this method into the current path selection.
I tried reviewing this code, but it's hard. The changes are scattered through a lot of different commits.
Usually, we group changes into commits by topic or function area. And we use the commit title to say what the commit changes.
Then, when we fix a bug, we use "git commit --fixup" so we can see when the bug was fixed.
I will pay attention to that, thanks for the advice.
For example, I have to look at 20 commits to find out if this bug in the array size for commit e81a55ea has been fixed:
- if (exits) {
if (search_pivot_and_compute_wfbw_weights_(exits, bwweights->wee, 0,
guards->num_used) < 0)
This function is also missing a case for guardsexits.
It appears afterwards, in another commit. Yep, sorry for the mess.
<skip>
You skipped this, but it's really important for the implementation:
On 12 Jan 2018, at 00:47, teor teor2345@gmail.com wrote:
Also, I think you will need a new consensus method for this change. We don't use consensus parameters or torrc options to modify the content of the consensus, we use consensus methods.
Someone who understands consensus methods will need to revise your proposal and code before it is merged.
Added in the section "implementation note".
Here is how it works:
Every hour, directory authorities vote on a new consensus. Once the votes are collected, the dirauths produce a deterministic network consensus document containing information about each relay, including the waterfilling bandwidth-weights produced from the equations described above. e.g.:
...(more relays)
r relayguard34 PPTH75+WkHl1GGw07DRE/S+JNdo 1970-01-01 00:02:37 51.254.112.52 9111 9112
m lp08MLTivsSZPhpZQy88i8NPeBNx10tPKBpHZsM3gYM
s Fast Guard HSDir Running Stable V2Dir Valid
v Tor 0.2.6.7
w Bandwidth=10200
wfbw wgg=8029 wmg=1971
r 4uthority3 PYnzXGQ+67m0WtO64qtJkwsUzME 1970-01-01 00:02:33 11.0.5.71 9111 9112
m d8G2/8UQzAN3a9DixCtmaivhfUFTvlFKAxCAV1xHVKk
s Authority Fast Guard HSDir Running Stable V2Dir Valid
v Tor 0.2.6.7
w Bandwidth=1890
wfbw wgg=10000 wmg=0
...(more relays)
In this example, we see two relays having the Guard flag and their new waterfilling bandwidth allocation given on the lines starting with wfbw.
These are redundant, and could significantly expand the size of the consensus, and consensus diffs. (Consensus weights are one of the largest contributors to consensus diff size.)
True, each time the consensus weights are modified, those waterfilling weights need to be recomputed. It adds one line per guard, which is about 2~3% of the size of the full consensus. Is such a thing a problem?
Changing bandwidths in each consensus makes consensus diffs *much* bigger.
See this proposal for details and an analysis: https://gitweb.torproject.org/torspec.git/tree/proposals/276-lower-bw-granul...
Ok. If I understand correctly, any bandwidth change would need a full re-calculation and a broadcast of the new weights (1 per guard) in the next consensus diff. I didn't imagine that it could be a problem. But if consensus diff size matters, then moving the weight calculation to the client code would fix it.
The same issue applies to any bandwidth figures. Would it be sufficient to publish a small ratio instead?
For example, you could publish lines like:
w Bandwidth=100000 wfratio=1
or
w Bandwidth=100000 wfratio=3
And have clients calculate wfbw as follows:
wfbw = Bandwidth * 2^-wfratio
This would be much smaller, much more compressible, and would also solve some of your other issues around remainders, and also equal-weight guards and stable sorts.
You would need to use equations something like this:
Compute a "pivot" N and a weight ratio Wggr_i assigned to relay_i in such a way that:
Each high-bandwidth guard has a guard consensus weight within a factor of 2 of water level:
(a) 2 * 2^-Wggr_(N+1) * BW_(N+1) > 2^-Wggr_i * BW_i >= 2^-Wggr_(N+1) * BW_(N+1) forall i in [1, N]
Each low-bandwidth guard is under the water level: (unchanged)
(b) Wggr_i == 0 forall i in [N+1, K]
The total guard weight is within a ratio of the desired guard weight:
(c) 2 * Wgg * G > sum_{i=1}^{K} 2^-Wggr_i * BW_i >= Wgg * G
The upper bound on (c) is actually a worst-case for a single-relay network. We could get closer to Wgg * G by finding the lowest Wggr for the largest relay that satisfies:
(c.1) sum_{i=1}^{K} 2^-Wggr_i * BW_i >= Wgg * G
This might make the largest relay an exception to (a). I'm not sure if we'll need this optimisation or not.
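As a rough sanity check that the ranges in (a) and (c) are achievable (ignoring the optional (c.1) optimisation), a sketch like the following could derive the per-guard ratios, assuming the exact water level has already been computed as in the earlier sketch (hypothetical names, illustration only):

    import math

    def waterfilling_ratios(bw, level):
        """Quantize the exact water level into per-guard power-of-two ratios:
        each guard above the level ends up within a factor of two of it (a),
        guards at or below the level keep ratio 0, i.e. full consensus weight
        in the guard position (b)."""
        ratios = []
        for b in bw:
            if b > level:
                # Smallest integer r with 2*level > b * 2**-r >= level,
                # i.e. the largest power-of-two reduction that keeps the
                # guard at or above the level.
                r = int(math.floor(math.log2(b / level)))
            else:
                r = 0
            ratios.append(r)
        return ratios

    # A client would then recover the guard-position bandwidth as
    #   wfbw = Bandwidth * 2**-wfratio
    # e.g. a guard with Bandwidth=100000 and wfratio=3 would be used as 12500.

Since every guard above the level then contributes at least the exact level and less than twice it, the total should stay within the bounds of (c).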
Wow, so first of all, thanks for suggesting new ways of resolving issues. I really appreciate your consideration for this work.
This suggestion indeed reduces the diff problem, and I find it elegant. But, I have the following concerns:
- The upper bound in (a) is huge, and would be welcome for an adversary running relays. The adversary could manage to set up relays with almost 2 times the consensus weight of the water level and still be used at 100% in the entry position. This would reduce the benefits of this proposal a lot, right?
- The analysis in our paper holds for the equations we showed. Even if these ones are very similar, I am not sure we can say that it would be good without re-doing the research.
With your explanations below (weight change on clients), and given that the consensus diff size is a thing, I am leaning towards believing that the weight calculation should be done on clients. Anyway, I have added a remark about this possibility within the proposal.
Why not calculate wmg on clients? Why not calculate both wgg and wmg on clients?
This is again a very good question: for such a critical feature (path selection), it is important that the directory auths have full power over the weight computation. If some change is needed, then the Tor network can be updated in a straightforward way. This is not the case if those weights are computed in client code. In fact, I believe that one of the strengths of this proposal is its design oriented towards the dirauths.
Consensus changes on dirauths:
- need to use consensus methods
- require a 6-12 month deployment time
- increase the size of all directory documents
- change the whole network
- if designed carefully, can be turned off using a network-wide parameter
Weight changes on clients:
- require a 3-6 month deployment time
- can be deployed to small numbers of clients
- if designed carefully, can be turned off using a network-wide parameter
I don't know which we'll prefer if we deploy this.
I have added a subsection in the proposal with those lines. Looks like we have to agree on this topic first before going into details in the implementation section?
You skipped this, but it's really important for the implementation:
If you must keep them in the consensus, please put these on the existing "w" lines, which are intended for this purpose:
"Other weighting keywords may be added later. Clients MUST ignore keywords they do not recognize."
https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2234
Please revise the proposal to conform to the existing directory spec.
OK, I have modified the example and opened discussion for other directions within the implementation (client side and your suggestion above).
Unanswered questions:
What about the feedback loop between this new allocation system and the bandwidth authorities?
I am sorry, I don't really understand why a feedback loop is needed. Measuring bandwidth and producing bandwidth-weights seems orthogonal to me.
You do not need to add a feedback loop, one already exists:
1. Consensus weights on guards and middles change
2. Client use of guards and middles changes
3. Bandwidth authority measurements of guards and middles change
4. Repeat from 1
My question is:
How does this existing feedback loop affect your proposal? Does it increase or reduce the size of the guard and middle weight changes?
I have added those questions to the proposal. They look difficult to answer.
Should bandwidth authority clients use the new system?
How do we report on the new system? How can we determine whether it is better for security in practice? How can we determine if it is faster or slower in practice? How can we work out if someone is trying to game the system?
I've added a section within the proposal for all upcoming questions.
Thanks.
If you skip sections with unanswered questions or issues that need to be fixed, please put them in this section as well. Otherwise, it's easy to lose track of them.
I should have added everything now. Thanks!
Best, Florentin
Can you store the proposal in a git repository somewhere, so people can see what you've changed in each revision? Otherwise, they have to re-read the whole proposal each time.
T
-- Tim / teor
PGP C855 6CED 5D90 A0C5 29F6 4D43 450C BA7F 968F 094B ricochet:ekmygaiu4rzgsk6n
tor-dev mailing list tor-dev@lists.torproject.org https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev