Hello!
George and I, along with the other participants of this hidden services meeting, have written a proposal for the idea of merging hidden service directories and introduction points into the same entity, alongside proposal 224.
Comments are encouraged, especially if there are downsides or side effects that we haven’t written about yet, or that you have a different opinion on. The intent is that we can decide to do this before implementing proposal 224, so they can be implemented together.
The proposal is attached, and also available from:
https://raw.githubusercontent.com/special/torspec/224-no-hsdir/proposals/ide...
Thanks!
- John
On Sun, Jul 12, 2015 at 05:48:12PM -0400, John Brooks wrote:
Filename: xxx-merge-hsdir-and-intro.txt
Title: Merging Hidden Service Directories and Introduction Points
Author: John Brooks, George Kadianakis
Created: 2015-07-12
Thanks! I have added it as proposal 246.
--Roger
arma - isn't prop 246 already taken?
Filename: 246-hs-guard-discovery.txt
Title: Defending Against Guard Discovery Attacks using Vanguards
Author: George Kadianakis
Created: 2015-07-10
Status: Draft
On 7/13/2015 1:12 AM, Roger Dingledine wrote:
On Sun, Jul 12, 2015 at 05:48:12PM -0400, John Brooks wrote:
Filename: xxx-merge-hsdir-and-intro.txt
Title: Merging Hidden Service Directories and Introduction Points
Author: John Brooks, George Kadianakis
Created: 2015-07-12
Thanks! I have added it as proposal 246.
--Roger
On 13 Jul 2015, at 07:48 , John Brooks john.brooks@dereferenced.net wrote:
Hello!
George and I, along with the other participants of this hidden services meeting, have written a proposal for the idea of merging hidden service directories and introduction points into the same entity, alongside proposal 224.
Comments are encouraged, especially if there are downsides or side effects that we haven’t written about yet, or that you have a different opinion on. The intent is that we can decide to do this before implementing proposal 224, so they can be implemented together.
4.2. Restriction on the number of intro points and impact on load balancing
One drawback of this proposal is that the number of introduction points of a hidden service is now a constant global parameter. Hence, a hidden service can no longer adjust how many introduction points it uses, or select the nodes that will serve as its introduction points.
If we decide we want to allow popular hidden services to have more than 6 introduction points, we could use some of the 4 extra bits in the address to encode an introduction point count multiplier.
For example, if we used 2 bits:

  00 - 6 introduction points (the default)
  01 - 12 introduction points
  10 - 18 introduction points (or 24 if we use a geometric progression)
  11 - 24 introduction points (or 48 if we use a geometric progression)
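Strictly as a toy illustration (this is not proposal text, and the bit layout and names are made up), decoding such a 2-bit field might look like:

    # Hypothetical sketch: map a 2-bit field from the onion address to an
    # introduction point count; both tables mirror the values listed above.
    INTRO_COUNTS_LINEAR = {0b00: 6, 0b01: 12, 0b10: 18, 0b11: 24}
    INTRO_COUNTS_GEOMETRIC = {0b00: 6, 0b01: 12, 0b10: 24, 0b11: 48}

    def intro_point_count(spare_bits, geometric=False):
        table = INTRO_COUNTS_GEOMETRIC if geometric else INTRO_COUNTS_LINEAR
        return table[spare_bits & 0b11]  # e.g. intro_point_count(0b01) == 12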
There is a tradeoff in choosing the number of bits: The fewer alternatives we provide, the larger each anonymity class, and the harder it is to identify hidden services simply by counting their IPs. But having only a few alternatives also reduces HS flexibility in response to load.
I can see advantages in popular, high-availability, or politically unpopular hidden services having more introduction points. It would make it harder to overload all the introduction points, particularly once the server can't change introduction points when they go down.
Perhaps we only need two introduction point counts: 6 and 6n, where n is initially chosen based on the most popular hidden services, and is a consensus parameter, so can be updated if needed.
Tim
Tim Wilson-Brown (teor)
teor2345 at gmail dot com pgp ABFED1AC https://gist.github.com/teor2345/d033b8ce0a99adbc89c5
teor at blah dot im OTR D5BE4EC2 255D7585 F3874930 DB130265 7C9EBBC7
teor teor2345@gmail.com wrote:
On 13 Jul 2015, at 07:48 , John Brooks john.brooks@dereferenced.net wrote:
4.2. Restriction on the number of intro points and impact on load balancing
One drawback of this proposal is that the number of introduction points of a hidden service is now a constant global parameter. Hence, a hidden service can no longer adjust how many introduction points it uses, or select the nodes that will serve as its introduction points.
If we decide we want to allow popular hidden services to have more than 6 introduction points, we could use some of the 4 extra bits in the address to encode an introduction point count multiplier.
Interesting approach. This value would be difficult to change on an existing service, because it would have to hand out a new subtly-different URL to all clients. We had also briefly discussed using the extra 4 bits as a checksum on the address.
I can see advantages in popular, high-availability, or politically unpopular hidden services having more introduction points. It would make it harder to overload all the introduction points, particularly once the server can't change introduction points when they go down.
I _think_ that 6 introduction points can handle a lot of traffic, and I think it’s mostly sufficient for reliability and abuse-resistance. Personally, I’d like to see statistics showing that introduction points are a bottleneck before we design systems to increase the number of them. I suspect that guards fall down long before introduction points do. There are tradeoffs to using more, both in terms of network load and privacy (e.g. exposing popularity to relays).
Perhaps we only need two introduction point counts: 6 and 6n, where n is initially chosen based on the most popular hidden services, and is a consensus parameter, so can be updated if needed.
The introduction point count should be a consensus parameter; 224 was going to do this for HSDirs already.
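A minimal sketch of how that might be consumed, assuming a hypothetical parameter name ("NumIntroPoints") and a dict-like consensus-parameters interface; none of these names come from the proposal:

    # Sketch only: "NumIntroPoints" is an invented parameter name; 6 matches
    # the default count discussed above.
    DEFAULT_INTRO_POINTS = 6

    def intro_point_count(consensus_params):
        n = consensus_params.get("NumIntroPoints", DEFAULT_INTRO_POINTS)
        return max(1, min(n, 48))  # clamp to a sane range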
- John
On Sun, Jul 12, 2015 at 4:48 PM, John Brooks john.brooks@dereferenced.net wrote:
Comments are encouraged, especially if there are downsides or side effects that we haven’t written about yet, or that you have a different opinion on. The intent is that we can decide to do this before implementing proposal 224, so they can be implemented together.
So an IP can do some things attack-wise that an HSDir cannot:

- Availability monitoring (useful for intersection or confirmation)
- Some side-channel linking attacks like latency and relay-clogging
- ... other things? I feel like there could be more...
This proposal doubles the default number of IPs and reduces the "cost" of being an IP since the probability of being selected is no longer bandwidth-weighted. Is this a fair tradeoff for the performance improvement?
That seems easy to fix. Make the number of Introduction Points the same as it was before, and select them in a bandwidth-weighted way. There is no cost to this. You need IPs to be online, and so whatever number was used in the past will yield the same availability now. And bandwidth-weighting should actually improve both performance and security.
Aaron
Hi Aaron,
On Fri, Jul 17, 2015 at 4:54 PM, A. Johnson aaron.m.johnson@nrl.navy.mil wrote:
This proposal doubles the default number of IPs and reduces the "cost" of being an IP since the probability of being selected is no longer bandwidth-weighted. Is this a fair tradeoff for the performance improvement?
That seems easy to fix. Make the number of Introduction Points the same as it was before, and select them in a bandwidth-weighted way. There is no cost to this. You need IPs to be online, and so whatever number was used in the past will yield the same availability now. And bandwidth-weighting should actually improve both performance and security.
Is it obvious how to build a bandwidth-weighted DHT that is stable across changes in the consensus? One advantage of using the hash ring is that the loss of an HSDir causes only local changes in the topology, so if the client is using a different version of the consensus they can still locate one of the responsible HSDirs. (Note: I do not claim this cannot be done; it just seems like an important detail to sort out...)
That seems easy to fix. Make the number of Introduction Points the same as it was before, and select them in a bandwidth-weighted way. There is no cost to this. You need IPs to be online, and so whatever number was used in the past will yield the same availability now. And bandwidth-weighting should actually improve both performance and security.
Is it obvious how to build a bandwidth-weighted DHT that is stable across changes in the consensus? One advantage of using the hash ring is that the loss of an HSDir causes only local changes in the topology, so if the client is using a different version of the consensus they can still locate one of the responsible HSDirs. (Note: I do not claim this cannot be done; it just seems like an important detail to sort out…)
You could reserve a space after each relay in the hash ring with a length equal to the relay's bandwidth, and then assign an onion service with a hash that is a fraction f of the maximum possible hash value to the relay owning the space in which the fraction f of the total hash-ring length is located. Removing or adding a relay adjusts the onion service locations by an amount that is at most the fraction that is that relay’s total bandwidth fraction. To ensure coverage for clients with older consensuses, the relay can maintain HSDir+IPs at the locations indicated by both the current and previous consensus.
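A minimal sketch of that interval scheme, assuming relays are given as (name, bandwidth) pairs in a canonical order (the representation is hypothetical):

    import bisect

    # Each relay owns a span of the ring proportional to its bandwidth; a
    # service whose hash is a fraction f of the maximum hash value lands on
    # the relay whose span contains fraction f of the total bandwidth.
    def assign(service_hash, max_hash, relays):
        totals, running = [], 0
        for _name, bw in relays:
            running += bw
            totals.append(running)
        # integer math: position at fraction (service_hash / max_hash)
        target = service_hash * running // max_hash
        i = min(bisect.bisect_right(totals, target), len(relays) - 1)
        return relays[i][0]

Removing one relay shifts only the services mapped past its span, by at most that relay's bandwidth fraction, which matches the bound above.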
Aaron
Aaron Johnson transcribed 2.1K bytes:
That seems easy to fix. Make the number of Introduction Points the same as it was before, and select them in a bandwidth-weighted way. There is no cost to this. You need IPs to be online, and so whatever number was used in the past will yield the same availability now. And bandwidth-weighting should actually improve both performance and security.
Is it obvious how to build a bandwidth-weighted DHT that is stable across changes in the consensus? One advantage of using the hash ring is that the loss of an HSDir causes only local changes in the topology, so if the client is using a different version of the consensus they can still locate one of the responsible HSDirs. (Note: I do not claim this cannot be done; it just seems like an important detail to sort out…)
You could reserve a space after each relay in the hash ring with a length equal to the relay's bandwidth, and then assign an onion service with a hash that is a fraction f of the maximum possible hash value to the relay owning the space in which the fraction f of the total hash-ring length is located. Removing or adding a relay adjusts the onion service locations by an amount that is at most the fraction that is that relay’s total bandwidth fraction. To ensure coverage for clients with older consensuses, the relay can maintain HSDir+IPs at the locations indicated by both the current and previous consensus.
At one point, I wanted to do something like this for BridgeDB's hashrings, to be able to increase the probability that Bridges with higher bandwidth would be distributed (assuming we live in a glorious future where Bridges are actually measured). The above design isn't the most time efficient, and it also turned out to be *super* unfun to implement/debug. For HS reliability, it could be a bit disastrous, depending on how much "shifting" happens between consensuses, and (at least, for BridgeDB's case) my testing showed that even a small differential meant that nearly the entire hashring would be in an unusable state.
A better algorithm would be a Consistent Hashring, modified to dynamically allocate replications in proportion to fraction of total bandwidth weight. As with a normal Consistent Hashring, replications determine the number times the relay is uniformly inserted into the hashring. The algorithm goes like this:
bw_total ← Σ BW, ∀ DESC ∈ CONSENSUS {BW: DESC → BW}
router ← ⊥
replications ← ⊥
key ← ⊥
for router ∈ CONSENSUS:
|   replications ← FLOOR(CONSENSUS_WEIGHT_FRACTION(BW, bw_total) * T)
|   while replications != 0:
|   |   key ← HMAC(CONCATENATE(FPR, replications))[:X]
|   |   INSERT(key, router)
|   |   replications ← replications - 1
where:
* BW is the router's `w Bandwidth=` weight line from the consensus,
* DESC is a descriptor in the CONSENSUS,
* CONSENSUS_WEIGHT_FRACTION is a function for computing a router's consensus weight in relation to the summation of consensus weights (bw_total),
* T is some arbitrary number for translating a router's consensus weight fraction into the number of replications,
* HMAC is a keyed hashing function,
* FPR is a hexadecimal string containing the hash of the router's public identity key,
* X is some arbitrary number of bytes to truncate an HMAC to, and
* INSERT is an algorithm for inserting items into the hashring.
For routers A and B, where B has a little bit more bandwidth than A, this gets you a hashring which looks like this:
     B-´¯¯`-BA
    A,`      `.
    /          \
   B|          |B
    \          /
     `.      ,´A
      AB--__--´B
When B disappears, A remains in the same positions:
     _-´¯¯`-_A
    A,`      `.
    /          \
    |          |
    \          /
     `.      ,´A
     A`--__--´
And similarly if A disappears:
     B-´¯¯`-B
    ,`      `.
    /          \
   B|          |B
    \          /
     `.      ,´
      B--__--´B
So no more "shifting" problem. It also makes recalculation of the hashring when a new consensus arrives much more time efficient.
If you want to play with it, I've implemented it in Python for BridgeDB:
https://gitweb.torproject.org/user/isis/bridgedb.git/tree/bridgedb/hashring....
One tiny caveat being that the ConsistentHashring class doesn't dynamically assign replication count by bandwidth weight (still waiting for that glorious future…); it gets initialised with the number of replications. However, nothing in the current implementation prevents you from doing:
h = ConsistentHashring('SuperSecureKey', replications=6)
h.insert(A)
h.replications = 23
h.insert(B)
h.replications = 42
h.insert(C)
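For readers who don't want to dig through BridgeDB, here is a self-contained sketch of the same idea; the key, the 8-byte truncation, and the scaling constant T are arbitrary choices for illustration, not anything BridgeDB or tor actually uses:

    import bisect, hashlib, hmac

    class BwConsistentHashring:
        """Consistent hashring with bandwidth-proportional replications."""
        def __init__(self, key, T=1000):
            self.key, self.T = key.encode(), T
            self.keys, self.routers = [], {}

        def _position(self, fpr, i):
            # One ring position per replication, keyed by truncated HMAC(fpr||i).
            mac = hmac.new(self.key, ("%s%d" % (fpr, i)).encode(),
                           hashlib.sha256).digest()
            return int.from_bytes(mac[:8], "big")

        def insert(self, fpr, bw, bw_total):
            # FLOOR(consensus weight fraction * T) replications, as in the
            # pseudocode above.
            for i in range(int(bw / bw_total * self.T)):
                k = self._position(fpr, i)
                bisect.insort(self.keys, k)
                self.routers[k] = fpr

        def lookup(self, point):
            # First ring position clockwise of `point`, wrapping around.
            if not self.keys:
                raise LookupError("empty ring")
            i = bisect.bisect_right(self.keys, point) % len(self.keys)
            return self.routers[self.keys[i]]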
Best Regards,
On Thu, Jul 23, 2015 at 11:56 PM, isis isis@torproject.org wrote:
Aaron Johnson transcribed 2.1K bytes:
That seems easy to fix. Make the number of Introduction Points the same as it was before, and select them in a bandwidth-weighted way. There is no cost to this. You need IPs to be online, and so whatever number was used in the past will yield the same availability now. And bandwidth-weighting should actually improve both performance and security.
Is it obvious how to build a bandwidth-weighted DHT that is stable across changes in the consensus? One advantage of using the hash ring is that the loss of an HSDir causes only local changes in the topology, so if the client is using a different version of the consensus they can still locate one of the responsible HSDirs. (Note: I do not claim this cannot be done; it just seems like an important detail to sort out…)
You could reserve a space after each relay in the hash ring with a length equal to the relay's bandwidth, and then assign an onion service with a hash that is a fraction f of the maximum possible hash value to the relay owning the space in which the fraction f of the total hash-ring length is located. Removing or adding a relay adjusts the onion service locations by an amount that is at most the fraction that is that relay’s total bandwidth fraction. To ensure coverage for clients with older consensuses, the relay can maintain HSDir+IPs at the locations indicated by both the current and previous consensus.
At one point, I wanted to do something like this for BridgeDB's hashrings, to be able to increase the probability that Bridges with higher bandwidth would be distributed (assuming we live in a glorious future where Bridges are actually measured). The above design isn't the most time efficient, and it also turned out to be *super* unfun to implement/debug. For HS reliability, it could be a bit disastrous, depending on how much "shifting" happens between consensuses, and (at least, for BridgeDB's case) my testing showed that even a small differential meant that nearly the entire hashring would be in an unusable state.
FWIW, I was running a simulation of this algorithm with the first week of July's consensuses when Isis posted the following way smarter algorithm:
A better algorithm would be a Consistent Hashring, modified to dynamically allocate replications in proportion to fraction of total bandwidth weight. As with a normal Consistent Hashring, replications determine the number times the relay is uniformly inserted into the hashring.
So I simulated this one also (with one exception: I didn't scale the number of replications by the total bandwidth. Instead, each HSDir simply gets a number of ring locations proportional to its measured bandwidth. The scaling should happen automatically.) The simulation works as follows: pick 10000 hash ring locations, and for each location, track how many distinct relays would be responsible for that location for one calendar day. The simulation was run for 7/1/15 through 7/7/15. With Aaron's algorithm, the average hash ring location mapped to 9.96 distinct relays each day; with Isis' consistent hash ring approach, the average location mapped to 1.41 distinct relays each day.
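A rough sketch of that measurement, with `build_ring` as a placeholder for whichever ring construction is being simulated (it should return an object with a lookup() method, like the hashring sketch earlier in the thread):

    import random

    # Sample fixed ring locations, then count how many distinct relays own
    # each location across one day's worth of consensuses.
    def average_daily_churn(day_of_consensuses, build_ring, n=10000):
        locations = [random.getrandbits(64) for _ in range(n)]
        owners = [set() for _ in locations]
        for consensus in day_of_consensuses:
            ring = build_ring(consensus)
            for j, loc in enumerate(locations):
                owners[j].add(ring.lookup(loc))
        return sum(len(s) for s in owners) / n  # avg distinct relays per location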
FWIW, I was running a simulation of this algorithm with the first week of July's consensuses when Isis posted the following way smarter algorithm:
A better algorithm would be a Consistent Hashring, modified to dynamically allocate replications in proportion to fraction of total bandwidth weight. As with a normal Consistent Hashring, replications determine the number times the relay is uniformly inserted into the hashring.
So I simulated this one also (with one exception: I didn't scale the number of replications by the total bandwidth… With Aaron's algorithm, the average hash ring location mapped to 9.96 distinct relays each day; with Isis' consistent hash ring approach, the average location mapped to 1.41 distinct relays each day.
Excellent stuff, Isis and Nick! I agree that Isis’s algorithm is superior in that it reduces the number of times an onion service is forced to republish its descriptors because its directories have changed.
Cheers, Aaron
A. Johnson aaron.m.johnson@nrl.navy.mil wrote:
This proposal doubles the default number of IPs and reduces the "cost" of being an IP since the probability of being selected is no longer bandwidth-weighted. Is this a fair tradeoff for the performance improvement?
That seems easy to fix. Make the number of Introduction Points the same as it was before, and select them in a bandwidth-weighted way. There is no cost to this. You need IPs to be online, and so whatever number was used in the past will yield the same availability now. And bandwidth-weighting should actually improve both performance and security.
I think bandwidth weight isn't appropriate for this. If we think the cost of running an HSDir(+IP) is too low, we should increase that directly. This is a good case where we can benefit from the many honest-but-not-well-funded relays. Concentrating even more traffic and information onto the highest-bandwidth relays isn't an improvement.
- John
Aaron
On Jul 20, 2015, at 4:03 PM, John Brooks john.brooks@dereferenced.net wrote:
A. Johnson aaron.m.johnson@nrl.navy.mil wrote:
This proposal doubles the default number of IPs and reduces the "cost" of being an IP since the probability of being selected is no longer bandwidth-weighted. Is this a fair tradeoff for the performance improvement?
That seems easy to fix. Make the number of Introduction Points the same as it was before, and select them in a bandwidth-weighted way. There is no cost to this. You need IPs to be online, and so whatever number was used in the past will yield the same availability now. And bandwidth-weighting should actually improve both performance and security.
I think bandwidth weight isn't appropriate for this. If we think the cost of running an HSDir(+IP) is too low, we should increase that directly. This is a good case where we can benefit from the many honest-but-not-well-funded relays. Concentrating even more traffic and information onto the highest-bandwidth relays isn't an improvement.
The security problem is that it is cheaper to obtain an extra IP than it is to buy the commensurate fraction (viz. 1/7000) of Tor's bandwidth. Dividing HSDir+IP duties by relay makes it cheaper to observe a given fraction of client activity than dividing it by bandwidth would. Consider, for example, the LizardNSA botnet attack on Tor, in which thousands of low-bandwidth relays were added. If they had been at all surreptitious, they could have easily flooded the HSDir ring.
The performance problem of even division is that HSDir+IPs will perform a lot of actions, and relays with low bandwidth or CPU will not be able to handle as much of that activity as larger relays.
The uniform division of the hash ring has always seemed like an incorrect design choice, and it is one that we have an opportunity to fix.
Best, Aaron
Nicholas Hopper hopper@cs.umn.edu wrote:
On Sun, Jul 12, 2015 at 4:48 PM, John Brooks john.brooks@dereferenced.net wrote:
Comments are encouraged, especially if there are downsides or side effects that we haven’t written about yet, or that you have a different opinion on. The intent is that we can decide to do this before implementing proposal 224, so they can be implemented together.
So an IP can do some things attack-wise that an HSDir cannot:
- Availability monitoring (useful for intersection or confirmation)
- Some side-channel linking attacks like latency and relay-clogging
- ... other things? I feel like there could be more…
Fair points. We need to think carefully about this, but at a glance it doesn’t concern me very much: both of these capabilities are also available to clients. If the IP+HSDir can identify the service (knows the unblinded public key), it could do the same attacks as a client. This may be more relevant for some client-authorized services.
This proposal also makes it more difficult to get your IP chosen for a target service, so it could be an improvement against this attacker.
This proposal doubles the default number of IPs and reduces the "cost" of being an IP since the probability of being selected is no longer bandwidth-weighted. Is this a fair tradeoff for the performance improvement?
Viewed from the other direction, this proposal keeps the cost and attacker probabilities of being HSDir the same, and eliminates the risks from selecting additional relays as introduction points. It’s a win against an adversary with a malicious relay.
I think it's a security improvement _and_ a performance improvement.
- John
On 12/07/15 22:48, John Brooks wrote:
1.3. Other effects on proposal 224
An adversarial introduction point is not significantly more capable than a hidden service directory under proposal 224. The differences are:
1. The introduction point maintains a long-lived circuit with the service
2. The introduction point can break that circuit and cause the service to rebuild it
Regarding this second difference: the introduction point (cooperating with a corrupt middle node) could potentially try to discover the service's guard by repeatedly breaking the circuit until it was rebuilt through the corrupt middle node. Would it make sense to use vanguards here, as well as on rendezvous circuits?
Cheers, Michael
Michael Rogers michael@briarproject.org writes:
On 12/07/15 22:48, John Brooks wrote:
1.3. Other effects on proposal 224
An adversarial introduction point is not significantly more capable than a hidden service directory under proposal 224. The differences are:
1. The introduction point maintains a long-lived circuit with the service
2. The introduction point can break that circuit and cause the service to rebuild it
Regarding this second difference: the introduction point (cooperating with a corrupt middle node) could potentially try to discover the service's guard by repeatedly breaking the circuit until it was rebuilt through the corrupt middle node. Would it make sense to use vanguards here, as well as on rendezvous circuits?
Hello,
currently we address this intro point guard discovery attack by having hidden services retry only 3 times. After those 3 times, they ditch that intro point and pick another one.
That said proposal 247 suggests that hidden services use vanguards for both rendezvous and introduction point circuits anyway.
Take care!
Hi,
Worth mentioning: after #15745 we rotate each introduction point after between 16384 and 32768 introductions (randomized) and/or after a lifetime of 18 to 24 hours (randomized).
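As a sketch, that rotation rule looks roughly like this (the thresholds are the #15745 values described above; the helper names are made up):

    import random, time

    INTRO_MIN, INTRO_MAX = 16384, 32768          # introductions
    LIFE_MIN, LIFE_MAX = 18 * 3600, 24 * 3600    # seconds

    def new_intro_point_limits():
        # Each intro point gets a random introduction budget and lifetime.
        return (random.randint(INTRO_MIN, INTRO_MAX),
                time.time() + random.uniform(LIFE_MIN, LIFE_MAX))

    def should_rotate(introduce2_seen, max_introductions, expiry_time):
        return (introduce2_seen >= max_introductions
                or time.time() >= expiry_time)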
If we merge introduction points with HSDirs, we have no option but to keep the same introduction points, regardless of how many INTRODUCE2 cells we get through them, until the next shared-RNG consensus value (normally 24 hours later; if something bad happens, we fall back to the disaster protocol for the shared RNG and use the previous known value). So if we adopt this, the IPs will have a fixed lifetime of exactly 24 hours, no less and no more (barring disaster).
Switching IPs randomly is not something we can afford to give up. IPs see exactly how popular a hidden service is, and they are always on a live circuit directly with the hidden service. The HSDirs in our current design allow us to be less paranoid about them, since they just hold descriptors, we don't keep live circuits with them, and the worst they can do is perform a DoS by not serving our descriptors - and this will be mitigated with the consensus shared-RNG anyway.
This would also leave #15714, with all its child tickets, without a subject to bind to.
We could complicate things a little bit and always select, using the consensus shared-RNG, 2 sets of 6 HSDir+IPs each. Initially we start with the 6 relays in the first set as IPs, and after between 16384 and 32768 (random value) INTRODUCE2 cells received we close the circuit to one IP, mark it as dirty, and open another circuit to one from the second set. However, this would be painful for clients, who will also select and try introduction points randomly and will get failures. Considering we now retry IPs rather than giving up after the first failed attempt, this could complicate things even further.
To be honest, I like the idea of merging HSDir and introduction point functionality in next generation hidden services, but it requires more code, more effort, and more pain regarding legacy compatibility (which will be a problem). It has some unquestionable, clear benefits which I totally agree with, but I am not fully convinced we are making a good deal here once we weigh the tradeoffs.
Also worth mentioning, I've said it on other threads but am saying it here since it's on topic: the OnionBalance testing I have performed demonstrated in practice, on the real network, that 6 introduction points for a hidden service should be just fine - the Guard will become a bottleneck long before the introduction points. I had on average about 2000 concurrent rendezvous circuits at any time of day, which means an enormous number of rendezvous circuits per 24 hours with just 2 introduction points.
On 8/20/2015 8:23 PM, George Kadianakis wrote:
Michael Rogers michael@briarproject.org writes:
On 12/07/15 22:48, John Brooks wrote:
1.3. Other effects on proposal 224
An adversarial introduction point is not significantly more capable than a hidden service directory under proposal 224. The differences are:
1. The introduction point maintains a long-lived circuit with the service
2. The introduction point can break that circuit and cause the service to rebuild it
Regarding this second difference: the introduction point (cooperating with a corrupt middle node) could potentially try to discover the service's guard by repeatedly breaking the circuit until it was rebuilt through the corrupt middle node. Would it make sense to use vanguards here, as well as on rendezvous circuits?
Hello,
currently we address this intro point guard discovery attack by having hidden services retry only 3 times. After those 3 times, they ditch that intro point and pick another one.
That said proposal 247 suggests that hidden services use vanguards for both rendezvous and introduction point circuits anyway.
Take care!
On 21 Aug 2015, at 04:36, s7r s7r@sky-ip.org wrote:
If we merge introduction points with HSDirs, we have no option but to keep the same introduction points, regardless of how many INTRODUCE2 cells we get through them, until the next shared-RNG consensus value (normally 24 hours later; if something bad happens, we fall back to the disaster protocol for the shared RNG and use the previous known value). So if we adopt this, the IPs will have a fixed lifetime of exactly 24 hours, no less and no more (barring disaster).
On protocol failure, the latest edition of the shared-random proposal has the authorities generate a different, predictable value every 24 hours, based on the most recent successful shared-random value.
This is a mitigation which requires an adversary to occupy new points on the hash ring each day, even in a disaster scenario where those points are predictable slightly further in advance.
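As an illustration only (the real construction is specified in the shared-randomness proposal and may differ), a predictable daily fallback value derived from the last successful shared-random value could look like:

    import hashlib

    # Hypothetical: hash the previous successful shared-random value with
    # the current period number to get a predictable disaster-mode value.
    def disaster_srv(previous_srv, period_num):
        data = (b"shared-random-disaster" + previous_srv
                + period_num.to_bytes(8, "big"))
        return hashlib.sha256(data).digest()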
Tim
Tim Wilson-Brown (teor)
teor2345 at gmail dot com pgp 0xABFED1AC https://gist.github.com/teor2345/d033b8ce0a99adbc89c5
teor at blah dot im OTR D5BE4EC2 255D7585 F3874930 DB130265 7C9EBBC7
On 12/07/15 at 23:48, John Brooks wrote:
Hello!
George and I, along with the other participants of this hidden services meeting, have written a proposal for the idea of merging hidden service directories and introduction points into the same entity, alongside proposal 224.
Comments are encouraged, especially if there are downsides or side effects that we haven’t written about yet, or that you have a different opinion on. The intent is that we can decide to do this before implementing proposal 224, so they can be implemented together.
The proposal is attached, and also available from:
https://raw.githubusercontent.com/special/torspec/224-no-hsdir/proposals/ide...
Thanks!
- John
Earlier today in Berlin we discussed this proposal a bit, and we noticed a small hack that could make #246 backwards compatible: instead of changing the descriptor format, we could publish a slightly different descriptor to each node in the hsdir ring, including only one introduction point - the hsdir node that the descriptor is published to. That way, the client will already have a circuit to the right node, and can just re-use it.
Example: imagine 2 HSdirs, one at 1.2.3.4 and one at 5.6.7.8. The client will talk to one of these HSdirs to fetch the descriptor; if it connects to 5.6.7.8, it gets only a single introduction point (5.6.7.8 itself), whereas if it had asked the directory on 1.2.3.4, it would have been told to introduce itself at 1.2.3.4. The client can then re-use the circuit to do the introduction.
The speedup may only be noticeable to clients that actually re-use the circuit, but all clients will at least keep working.
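A sketch of the publishing side of this hack, with make_descriptor and publish as placeholders (both names are made up for illustration):

    # For each responsible HSDir, publish a descriptor that lists only that
    # HSDir itself as the introduction point.
    def publish_merged_descriptors(responsible_hsdirs, make_descriptor, publish):
        for hsdir in responsible_hsdirs:
            publish(hsdir, make_descriptor(intro_points=[hsdir]))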
Tom