Hello relay operators!
I am planning on performing an experiment on the Tor network to try to gauge the accuracy of the advertised bandwidths that relays report in their server descriptors. Briefly, the experiment involves running a speed test on every relay for a short time (about 20 seconds). Details follow.
I plan to run the experiment in about 1 week. Relay operators can opt-out of the speed test by replying on this thread, and we will remove you from the list of relays to scan.
Peace, love, and positivity, Rob
---

Measuring the Accuracy of Tor Relays' Advertised Bandwidths
Motivation
----------
The capacity of Tor relays (maximum available goodput) is an important metric. Combined with mean goodput, it allows us to compute the bandwidth utilization of individual relays as well as the entire network in aggregate. Generally, capacity is used to help balance client load across relays, and relay utilization rates help Tor make informed decisions about how to allocate resources and prioritize performance and scalability improvements.
Problem
-------
Currently, Tor uses a heuristic measure of unknown accuracy to estimate Tor relay capacity. Each relay keeps track of the maximum goodput it has achieved over any 10 second window in a 24 hour period. This is called the "observed bandwidth". Relays take the minimum of their "observed bandwidth" and their bandwidth rate-limiting configuration and report the result as the "advertised bandwidth" in their server descriptors. We do not know how well the advertised bandwidth estimates the true relay capacity, but we do know that it represents a lower bound on capacity.
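For illustration, a minimal Python sketch of this heuristic (not Tor's actual C implementation; the function and its inputs are hypothetical):

    # Sketch: derive "advertised bandwidth" from per-second byte counts,
    # as the minimum of the best 10-second average ("observed bandwidth")
    # and the configured rate limit.
    def advertised_bandwidth(bytes_per_second, bandwidth_rate):
        """bytes_per_second: bytes relayed in each 1-second interval.
        bandwidth_rate: configured rate limit in bytes/second."""
        window = 10
        observed = 0.0
        for i in range(len(bytes_per_second) - window + 1):
            observed = max(observed, sum(bytes_per_second[i:i + window]) / window)
        return min(observed, bandwidth_rate)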
Hypothesis
----------
The advertised bandwidth significantly underestimates the true capacity of Tor relays. On average, higher true capacity will correlate with greater capacity underestimation, because it is less likely that fast relays will have sustained their full capacity over a 10 second period.
Experiment
----------
A relay reports its advertised bandwidth in its server descriptor. To test how well these reported numbers represent the true capacity of a relay, we can manually perform a speed test on the relay by initiating the simultaneous download of several large data streams for a period that exceeds 10 seconds. Following our test, the relay will update the advertised bandwidth in its server descriptor, and the results will be collected and published by metrics.torproject.org.
The experiment involves two steps: running the speed test on a relay under our control, and then running the speed test on all relays in the Tor network.
We will first run the speed test on at least one relay that we control, in order to test that the method is effective and that we can in fact observe a change in the advertised bandwidth reported on metrics.torproject.org. Once we have confidence that our speed test is functioning correctly, and that the metrics pipeline will allow us to gather the results, we will repeat it on all relays in the network.
We will conduct the speed tests while minimizing network overhead. We will use a custom client that builds 2-relay circuits. The first relay will be the target relay we are speed testing, and the second relay will be a fast exit relay that we control. We will initiate data streams between a speedtest client and server running on the same machine as our exit relay.
The setup will look like:
speedtest-client <--> tor-client <--> target-relay <--> exit-relay <--> speedtest-server
All components will run on the same machine that we control except for the target-relay, which will rotate as we test different relays in the network. For each target relay, we plan to run the speedtest for 20 seconds in order to increase the probability that the 10 second mean goodput will reach the true capacity. We will measure each relay over a few days to ensure that our speedtest effects are reported by every relay.
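To make the setup concrete, here is a minimal sketch of how such a 2-hop measurement circuit could be pinned using the stem controller library (fingerprints and control port are hypothetical; the actual tooling used for the experiment may differ):

    from stem import StreamStatus
    from stem.control import Controller, EventType

    TARGET_FP = '<fingerprint of relay under test>'  # hypothetical
    EXIT_FP = '<fingerprint of exit we control>'     # hypothetical

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()
        # Keep tor from attaching new streams itself, so we can pin them
        # to our 2-hop measurement circuit.
        controller.set_conf('__LeaveStreamsUnattached', '1')
        circ_id = controller.new_circuit([TARGET_FP, EXIT_FP], await_build=True)

        def attach(stream):
            if stream.status == StreamStatus.NEW:
                controller.attach_stream(stream.id, circ_id)

        controller.add_event_listener(attach, EventType.STREAM)
        # Now point the speedtest client at this tor's SOCKSPort; its
        # streams will ride the target-relay -> exit-relay circuit.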
Although we believe that the overhead of this speed test is in line with regular usage, relay operators can opt-out of the speed test by replying on this thread. Those that opt out will be removed from our list of relays to scan.
Analysis
--------
Following our speedtest, we will analyze the data collected and reported by Tor metrics. We will compare the advertised bandwidth that each relay reported before our experiment to the values reported during our experiment. This will help us test our hypothesis that relays' advertised bandwidth underestimates their true capacity. We will also run a statistical correlation analysis to test the strength of the correlation between the previously reported (estimated) relay capacity and relay capacity underestimation. We will report our results to the Tor community.
We expect that the results of our experiment will help Tor decide how to allocate resources and will help them plan and prioritize performance improvements. It will also provide insight into the operation of the current load balancing system, which uses advertised bandwidth to produce consensus weights.
On Fri, Jul 26, 2019 at 10:18:24AM -0400, Rob Jansen wrote:
I am planning on performing an experiment on the Tor network to try to gauge the accuracy of the advertised bandwidths that relays report in their server descriptors. Briefly, the experiment involves running a speed test on every relay for a short time (about 20 seconds).
Thanks Rob!
For context, I asked Rob to do this experiment, because we know that the current bandwidth authority design is mis-measuring relays, but we don't know how wrong things are. Giving every relay a short burst of load should give us some insight into how much traffic that relay can handle, which will in turn tell us how much room for improvement there is in our bandwidth estimation.
And as a bonus, for this one time, fast relays should actually be consistently seen as fast, and the Tor network should be better balanced and the user experience should be better. If we like how it works, our follow-up task will be to change things so we get this result all the time. :)
Woo, --Roger
Thank goodness something is being done to hopefully resolve some of the issues with unutilized bandwidth that people keep talking about constantly.
I get having to change things due to abuse and misconfigurations: the tor network using observed bandwidth plus some bandwidth testing to confirm/verify available bandwidth, versus just using whatever ol' configuration value is set.
But it's definitely kind of slowed nodes down in general :(
Matt Westfall President & CIO ECAN Solutions, Inc. Everything Computers and Networks 804.592.1672
Hi!
Good to hear that you guys are trying to solve the problem of relays being measured as slow. For example, when I measure my relay
40108FDFA40EDB013F7291F3B4DA3D412ED3A5EF
with the speedtest from tele2 I get about 90 MiB download and about 50 MiB upload, but Tor measures it at about 15 MiB. Some of my relays are measured very accurately, but others are measured at only about 1/5 of what my results show.
I read the sbws documentation about how the measuring process works, and I am curious how the experiment measures relays.
If possible, please publish a little more info about the experiment, or at least the results, somewhere. Thanks
On Jul 30, 2019, at 2:02 PM, Michael Gerstacker michael.gerstacker@googlemail.com wrote:
Hi!
Good to hear that you guys are trying to solve the problem of relays being measured as slow. For example, when I measure my relay
40108FDFA40EDB013F7291F3B4DA3D412ED3A5EF
with the speedtest from tele2 I get about 90 MiB download and about 50 MiB upload, but Tor measures it at about 15 MiB. Some of my relays are measured very accurately, but others are measured at only about 1/5 of what my results show.
Cool, I hope my experiment yields good results for your relay.
I read the sbws documentation about how the measuring process works, and I am curious how the experiment measures relays.
If possible, please publish a little more info about the experiment, or at least the results, somewhere. Thanks
Note that I am not using sbws for this experiment, but rather a custom measurement process. The plan is to use multiple Tor clients to create multiple sockets to the target relay, and then each client will extend a circuit through the target and then back to one of a set of relays running on the same machine as the client. I'm hoping the use of multiple sockets will help mitigate the effects of packet loss.
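Not Rob's actual tool, but a rough sketch of the multiple-clients, multiple-sockets idea (URL, ports, and file size are hypothetical; assumes the requests library with SOCKS support installed):

    import concurrent.futures
    import requests

    SOCKS_PORTS = [9050, 9052, 9054]             # one per tor client (assumed)
    URL = 'https://speedtest.example/100MB.bin'  # hypothetical test file

    def fetch(port):
        # socks5h:// resolves the hostname through tor as well
        proxies = {s: f'socks5h://127.0.0.1:{port}' for s in ('http', 'https')}
        total = 0
        with requests.get(URL, proxies=proxies, stream=True, timeout=60) as r:
            for chunk in r.iter_content(64 * 1024):
                total += len(chunk)
        return total

    with concurrent.futures.ThreadPoolExecutor(len(SOCKS_PORTS)) as pool:
        print(sum(pool.map(fetch, SOCKS_PORTS)), 'bytes downloaded')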
The results will be published when possible, after they have been analyzed and understood.
Peace, love, and positivity, Rob
Over the last 2 days I tested my speedtest on 4 test relays and verified that it does in fact increase relays' advertised bandwidth on Tor metrics.
Today, I started running the speedtest on all relays in the network. So far, I have finished about 100 relays (and counting). I expect that the advertised bandwidths reported by metrics will increase over the next few days. For this to happen, the bandwidth histories observed by a relay during my speedtest are first committed to the bandwidth history table (within 24 hours), and then reported in the server descriptors (within 18-36 hours, depending on when the bandwidth history commit happens).
Peace, love, and positivity, Rob
On Tue, Aug 06, 2019 at 05:31:39PM -0400, Rob Jansen wrote:
Today, I started running the speedtest on all relays in the network. So far, I have finished about 100 relays (and counting). I expect that the advertised bandwidths reported by metrics will increase over the next few days. For this to happen, the bandwidth histories observed by a relay during my speedtest are first committed to the bandwidth history table (within 24 hours), and then reported in the server descriptors (within 18-36 hours, depending on when the bandwidth history commit happens).
Great.
There will be another confusing (confounding) factor, which is that the weights in the consensus are chosen by the bandwidth authorities, so even if the relay's self-reported bandwidth goes up (because it now sees that it can handle more traffic), that doesn't mean that the consensus weight will necessarily go up. In theory it ought to, but with a day or so delay, as the bwauths catch on to the larger value in the descriptor; but in practice, I am not willing to make bets on whether it will behave as intended. :) So, call it another thing to keep an eye out for during the experiment.
Woo, --Roger
On 8/6/19, Roger Dingledine arma@torproject.org wrote:
On Tue, Aug 06, 2019 at 05:31:39PM -0400, Rob Jansen wrote:
Today, I started running the speedtest on all relays in the network.
There will be another confusing (confounding) factor, which is that the ... as intended. :) So, call it another thing to keep an eye out for during the experiment.
Someone here posted they were testing with sub-minute durations... tens of seconds. That's unlikely to be enough time for TCP to adjust across everything in the circuit and really measure "bandwidth". It is instead likely to be measuring something between "setup latency" and that, with an uncharacterized ramp in the middle.
You probably want to be scatterplotting a bunch of different things and durations on metrics.tpo.
And isolating out path nodes and things from whichever it is you're trying to measure. Introducing known inputs. Etc.
On 8/6/19 7:05 PM, grarpamp wrote:
Someone here posted they were testing with sub-minute durations... tens of seconds. That's unlikely to be enough time for TCP to adjust across everything in the circuit and really measure "bandwidth". It is instead likely to be measuring something between "setup latency" and that, with an uncharacterized ramp in the middle.
- That person was Rob, the one who just said they've started their measurements. Rob's original announcement is here[0].
- We've been looking into stuff like this for the last year and have some promising results from sub-minute-duration measurements with measurement hosts spread around the world. This is despite suspected sources of inaccuracy such as TCP slow start and high bandwidth delay products.
- What Rob is doing isn't even trying to get an accurate measurement of a relay's capacity. It's solely to test the hypothesis that observed bandwidth is a poor estimate of capacity*. I refer again to [0] for the motivation, design, etc.
You probably want to be scatterplotting a bunch of different things and durations on metrics.tpo.
And isolating out path nodes and things from whichever it is you're trying to measure. Introducing known inputs. Etc.
Thanks for the input. I'm sure our analysis of the collected data will be ... thorough and exciting.
Matt
* You might argue that an artificial 20 second or even 2 minute burst of traffic is still a bad estimate of long-term sustained capacity. That may be a good argument. But I'd argue that it's strictly better than the current strategy: keep track of your biggest natural 10 second burst observed in the last 5 days.
[0]: https://lists.torproject.org/pipermail/tor-relays/2019-July/017535.html
On Aug 6, 2019, at 5:48 PM, Roger Dingledine arma@torproject.org wrote:
On Tue, Aug 06, 2019 at 05:31:39PM -0400, Rob Jansen wrote:
Today, I started running the speedtest on all relays in the network. So far, I have finished about 100 relays (and counting). I expect that the advertised bandwidths reported by metrics will increase over the next few days. For this to happen, the bandwidth histories observed by a relay during my speedtest are first committed to the bandwidth history table (within 24 hours), and then reported in the server descriptors (within 18-36 hours, depending on when the bandwidth history commit happens).
Great.
There will be another confusing (confounding) factor, which is that the weights in the consensus are chosen by the bandwidth authorities, so even if the relay's self-reported bandwidth goes up (because it now sees that it can handle more traffic), that doesn't mean that the consensus weight will necessarily go up. In theory it ought to, but with a day or so delay, as the bwauths catch on to the larger value in the descriptor; but in practice, I am not willing to make bets on whether it will behave as intended. :) So, call it another thing to keep an eye out for during the experiment.
Another wrinkle to keep in mind is that my script measures one relay at a time. If there are multiple relays running on the same NIC, after my measurement each of them will think they have the full capacity of the NIC. So if we just add up all of the advertised bandwidths after my measurement without considering that some of them share a NIC, that will result in an over-estimate of the available capacity of the network.
To avoid over-estimating network capacity, we could use IP-based heuristics to guess which relays share a machine (e.g., if they share an IP address, or have a nearby IP address). In the long term, it would be nice if Tor would collect and report some sort of machine ID the same way it reports the platform.
Wheeeee! Rob
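As a sketch of the IP-based grouping heuristic Rob suggests above (the /24 granularity is an assumption, not part of his proposal): group relays by network prefix and count each group's capacity only once.

    import ipaddress
    from collections import defaultdict

    def grouped_capacity(relays):
        """relays: iterable of (ip, advertised_bw) pairs. Relays in the
        same /24 are assumed to possibly share a machine or NIC, so each
        group contributes only its maximum to the network total."""
        groups = defaultdict(list)
        for ip, bw in relays:
            groups[ipaddress.ip_network(f'{ip}/24', strict=False)].append(bw)
        return sum(max(bws) for bws in groups.values())

    relays = [('198.51.100.5', 80_000_000),  # hypothetical relays
              ('198.51.100.9', 75_000_000),
              ('203.0.113.7', 30_000_000)]
    print(grouped_capacity(relays))  # the /24 pair counts once: 110000000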
Hi Rob,
On 8 Aug 2019, at 22:15, Rob Jansen rob.g.jansen@nrl.navy.mil wrote:
Another wrinkle to keep in mind is that my script measures one relay at a time. If there are multiple relays running on the same NIC, after my measurement each of them will think they have the full capacity of the NIC. So if we just add up all of the advertised bandwidths after my measurement without considering that some of them share a NIC, that will result in an over-estimate of the available capacity of the network.
To avoid over-estimating network capacity, we could use IP-based heuristics to guess which relays share a machine (e.g., if they share an IP address, or have a nearby IP address). In the long term, it would be nice if Tor would collect and report some sort of machine ID the same way it reports the platform.
More precisely, we're trying to answer the question: "Which small sets of machines are limited by a common network link or shared CPU?"
A machine ID is an incomplete answer to this question: it doesn't deal with VMs, or multiple machines that share a router.
Here are some other potential heuristics:
* clock skew / precise time: machine/VM?
* nearby IP addresses and common ASN: machine?/VM?/router?
* platform: machine
* tor version: operator? (a proxy for machine/VM/router)
Is there a cross-platform API for machine IDs? Or similar APIs for our most common relay platforms? (Linux, BSDs, Windows)
T
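To illustrate the cross-platform problem teor raises, a hedged sketch of per-platform machine-ID lookups (assuming systemd's /etc/machine-id on Linux, the kern.hostuuid sysctl on FreeBSD, and the MachineGuid registry value on Windows; none is guaranteed to exist, and cloned VMs can share IDs, which is part of teor's point):

    import platform
    import subprocess

    def machine_id():
        system = platform.system()
        try:
            if system == 'Linux':
                with open('/etc/machine-id') as f:  # systemd machine ID
                    return f.read().strip()
            if system == 'FreeBSD':
                out = subprocess.check_output(['sysctl', '-n', 'kern.hostuuid'])
                return out.decode().strip()
            if system == 'Windows':
                import winreg
                key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                                     r'SOFTWARE\Microsoft\Cryptography')
                return winreg.QueryValueEx(key, 'MachineGuid')[0]
        except (OSError, subprocess.CalledProcessError):
            pass
        return None  # no portable answer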
I think that is a bad idea. You don't know enough about a relay to have a clue what the underlying hardware looks like from any of those metrics.
Simple example: you have an 8-core, 16-thread CPU, run 4 instances, each pinned to 2 threads, on a 10 gig pipe. You can run each tor relay at max speed without affecting any of the other relays on the same server, but with your chosen metrics you would slow all of them down just in case. I don't even think there are metrics from which you could guess that; the relay operator would have to set limits to do this effectively. Or, if Tor had proper multithreading support, you would just run one instance per server and be good to go with just the external measurement you have already done.
On 08.08.2019 14:15, Rob Jansen wrote:
To avoid over-estimating network capacity, we could use IP-based heuristics to guess which relays share a machine (e.g., if they share an IP address, or have a nearby IP address). In the long term, it would be nice if Tor would collect and report some sort of machine ID the same way it reports the platform.
On my servers every instance has its own IP. The IPv4 addresses are sometimes _very_ different; only the IPv6 addresses are similar (with 2 or 3 at the end).
See https://metrics.torproject.org/rs.html#search/TorOrDie4privacyNET
On 2019-08-06 23:31:39, "Rob Jansen" rob.g.jansen@nrl.navy.mil wrote:
Today, I started running the speedtest on all relays in the network. So far, I have finished about 100 relays (and counting). I expect that the advertised bandwidths reported by metrics will increase over the next few days. For this to happen, the bandwidth histories observed by a relay during my speedtest are first committed to the bandwidth history table (within 24 hours), and then reported in the server descriptors (within 18-36 hours, depending on when the bandwidth history commit happens).
Looks like my relays got measured: https://nerdin.se:12445/index.php/s/5LhwAzP61CHSJZ7 https://nerdin.se:12445/index.php/s/favR4ISXZKgICCa
My connection is 500 Mbit and the measurements got very close to that. It will be interesting to see if the observed BW increases.
On Aug 6, 2019, at 5:31 PM, Rob Jansen rob.g.jansen@nrl.navy.mil wrote:
Over the last 2 days I tested my speedtest on 4 test relays and verified that it does in fact increase relays' advertised bandwidth on Tor metrics.
Today, I started running the speedtest on all relays in the network. So far, I have finished about 100 relays (and counting). I expect that the advertised bandwidths reported by metrics will increase over the next few days.
Update: the measurement finished around 0100 UTC on 2019-08-09. I attempted to measure each relay that appeared in the latest consensus over time. Due to relay churn, this resulted in more measurements than the number of relays in a single consensus.
I attempted 7001 measurements:
- 4867 relays were successfully measured for 20 seconds each.
- 2134 relays timed out while trying to build the 10 speedtest circuits.
The measurement should be reflected in most server descriptors of successfully measured relays within 36 hours, at about 1300 UTC on 2019-08-10.
Peace, love, and positivity, Rob
Hi,
On 9 Aug 2019, at 23:25, Rob Jansen rob.g.jansen@nrl.navy.mil wrote:
Update: the measurement finished around 0100 UTC on 2019-08-09. I attempted to measure each relay that appeared in the latest consensus over time. Due to relay churn, this resulted in more measurements than the number of relays in a single consensus.
I attempted 7001 measurements:
- 4867 relays were successfully measured for 20 seconds each.
- 2134 relays timed out while trying to build the 10 speedtest circuits.
The measurement should be reflected in most server descriptors of successfully measured relays within 36 hours, at about 1300 UTC on 2019-08-10.
It looks like the measurement has increased advertised bandwidths:
Middle: 69%
Exit: 72%
Guard: 53%
Exit and Guard: 28%
https://metrics.torproject.org/bandwidth-flags.html
The growth is mainly in the top 10% of relays:
https://metrics.torproject.org/advbwdist-perc.html?start=2019-05-14&end=...
The IPv6 stats are similar:
Guards with IPv6 ORPort: 47%
Exits with IPv6 ORPort: 42%
Exits with IPv6Exit: 39%
https://metrics.torproject.org/advbw-ipv6.html
We don't have stats for consumed bandwidth yet; they should arrive in the next 3-5 days.
T
Hi Rob,
On 27 Jul 2019, at 00:18, Rob Jansen rob.g.jansen@nrl.navy.mil wrote:
I am planning on performing an experiment on the Tor network to try to gauge the accuracy of the advertised bandwidths that relays report in their server descriptors. Briefly, the experiment involves running a speed test on every relay for a short time (about 20 seconds). Details follow.
...
Motivation
The capacity of Tor relays (maximum available goodput) is an important metric. Combined with mean goodput, it allows us to compute the bandwidth utilization of individual relays as well as the entire network in aggregate. Generally, capacity is used to help balance client load across relays, and relay utilization rates help Tor make informed decisions about how to allocate resources and prioritize performance and scalability improvements.
Can you define "goodput"? How is it different to the bandwidth reported by a standard speed test? How is it different to the bandwidth measured by sbws?
...
We will conduct the speed tests while minimizing network overhead. We will use a custom client that builds 2-relay circuits. The first relay will be the target relay we are speed testing, and the second relay will be a fast exit relay that we control. We will initiate data streams between a speedtest client and server running on the same machine as our exit relay.
The setup will look like:
speedtest-client <--> tor-client <--> target-relay <--> exit-relay <--> speedtest-server
All components will run on the same machine that we control except for the target-relay, which will rotate as we test different relays in the network. For each target relay, we plan to run the speedtest for 20 seconds in order to increase the probability that the 10 second mean goodput will reach the true capacity. We will measure each relay over a few days to ensure that our speedtest effects are reported by every relay.
Where is your server? How do you expect the location of your server to affect your results?
T
On Jul 31, 2019, at 7:34 PM, teor teor@riseup.net wrote:
Hi Rob,
Hey there!
Can you define "goodput"?
Application-level throughput, i.e., bytes transferred in packet payloads but not counting packet headers or retransmissions. In our case I mean the number of bytes that Tor reports in the BW controller event.
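For reference, a minimal stem sketch of reading those per-second goodput figures from the BW event (control port hypothetical):

    from stem.control import Controller, EventType

    with Controller.from_port(port=9051) as controller:
        controller.authenticate()

        def on_bw(event):
            # BW events carry bytes read/written in the last second,
            # i.e. application-level goodput as defined above.
            print(f'read {event.read} B/s, written {event.written} B/s')

        controller.add_event_listener(on_bw, EventType.BW)
        input('measuring; press enter to stop\n')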
How is it different to the bandwidth reported by a standard speed test?
I believe that iperf also reports goodput as defined above.
How is it different to the bandwidth measured by sbws?
I am not an expert on sbws, but I believe it also measures goodput.
Where is your server?
West coast US.
How do you expect the location of your server to affect your results?
I expect that the packet loss that occurs between my measurement machine and the target may limit the goodput I am able to achieve, and packet loss tends to occur more frequently on links with higher latency. I plan to use multiple sockets (as standard speed testing tools like iperf do) and multiple circuits to try to mitigate the effects.
Note that this is meant to be a fairly simple experiment, not a complete measurement system. Of course I won't be able to measure more than the bandwidth capacity of my measurement machine, but many relays already carry significant load so I'll just be giving them a boost.
Peace, love, and positivity, Rob
Hi again,
On 2 Aug 2019, at 08:18, Rob Jansen rob.g.jansen@nrl.navy.mil wrote:
How do you expect the location of your server to affect your results?
I expect that the packet loss that occurs between my measurement machine and the target may limit the goodput I am able to achieve, and packet loss tends to occur more frequently on links with higher latency.
Tor's stream window also limits the goodput of a single stream. The in-flight data is limited to 500 cells * 498 goodput bytes per RELAY_DATA cell = 249,000 bytes, about 243 KiB.
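A back-of-the-envelope sketch of that ceiling (the RTT values below are illustrative): with a full window in flight, a single stream can move at most one window of data per round trip.

    WINDOW_CELLS = 500    # Tor's stream-level SENDME window
    BYTES_PER_CELL = 498  # usable goodput bytes per RELAY_DATA cell
    window_bytes = WINDOW_CELLS * BYTES_PER_CELL  # 249,000 B, ~243 KiB

    for rtt_ms in (50, 100, 200):
        goodput = window_bytes / (rtt_ms / 1000)
        print(f'RTT {rtt_ms} ms -> at most {goodput / 1e6:.2f} MB/s per stream')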
I plan to use multiple sockets (as standard speed testing tools like iperf do) and multiple circuits to try to mitigate the effects.
Good. sbws only uses one stream at a time, and its streams are open for 5-10 seconds.
Note that this is meant to be a fairly simple experiment, not a complete measurement system. Of course I won't be able to measure more than the bandwidth capacity of my measurement machine, but many relays already carry significant load so I'll just be giving them a boost.
Sounds like a useful experiment.
If using multiple circuits for 20 seconds makes a significant difference to some relays, we should consider changing sbws to:
* use multiple circuits,
* use 2 streams per circuit (to fill each circuit window), and
* run each test for 20 seconds.

Or we could modify the relay bandwidth self-test to:
* use significantly more bandwidth, and try to find the bandwidth limit for each relay, and
* run each test for 20 seconds.
(The relay bandwidth self-test uses DROP cells on multiple circuits, so stream windows don't apply.)
T
On 7/26/19 4:18 PM, Rob Jansen wrote:
I am planning on performing an experiment on the Tor network to try to gauge the accuracy of the advertised bandwidths that relays report in their server descriptors.
Hi,
did this by any chance cause the loss of the "guard" flag? Observed here now twice within the last few days, for 2 relays running at the same IP, [1] and [2].
[1] https://metrics.torproject.org/rs.html#details/509EAB4C5D10C9A9A24B4EA0CE402... [2] https://metrics.torproject.org/rs.html#details/63BF46A63F9C21FD315CD061B3EAA...
Hi,
On 17 Aug 2019, at 18:11, Toralf Förster toralf.foerster@gmx.de wrote:
Yes, changing other relays' bandwidths can affect the Guard flag, because Guard is given to the fastest, most stable relays.
T
On 8/19/19 4:56 AM, teor wrote:
Yes, changing other relays' bandwidths can affect the Guard flag, because Guard is given to the fastest, most stable relays.
I'm not convinced that this is the culprit for the mentioned relay [1].
I found another relay [2] where at least 4 of the 9 authorities don't set the "Running" flag, which is needed for "Guard", right? That relay has a reasonable bw value too (23,000; FWIW, the value for [1] is about 90,000).
So now I do wonder why the Running flag is lost after a year.
[1] https://consensus-health.torproject.org/consensus-health-2019-08-24-07-00.ht... [3] https://consensus-health.torproject.org/consensus-health-2019-08-24-08-00.ht...
Hi,
On 24 Aug 2019, at 19:38, Toralf Förster toralf.foerster@gmx.de wrote:
I'm not convinced that this is the culprit for the mentioned relay [1].
I found another relay [2] where at least 4 of the 9 authorities don't set the "Running" flag, which is needed for "Guard", right? That relay has a reasonable bw value too (23,000; FWIW, the value for [1] is about 90,000).
So now I do wonder why the Running flag is lost after a year.
An authority assigns the Running flag to a relay when it can reach that relay on its IPv4 ORPort. If the authority is configured to do IPv6 reachability checks, then it also checks the IPv6 ORPort (if there is one).
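The authority's real check completes a full Tor handshake, but as a rough sketch (address and port hypothetical), the reachability test boils down to:

    import socket

    def orport_reachable(addr, port, timeout=10.0):
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(orport_reachable('203.0.113.7', 9001))  # hypothetical relay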
[1] https://consensus-health.torproject.org/consensus-health-2019-08-24-07-00.ht...
At the moment, 4/9 authorities can't reach this relay. Maybe its provider is dropping traffic from some routes, or maybe it is overloaded.
[3] https://consensus-health.torproject.org/consensus-health-2019-08-24-08-00.ht...
At the moment, 6/9 authorities can't reach this relay. Same possibilities.
T
On Sun, Aug 25, 2019 at 10:24:21AM +1000, teor wrote:
I found another relay [2] where at least 4 of the 9 authorities doesn't set the "Running" flag, which is needed for "Guard", right?
Correct, I believe we don't vote the Guard flag if we are not voting the Running flag: https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2598 I think that is because of the definition of 'active'.
[3] https://consensus-health.torproject.org/consensus-health-2019-08-24-08-00.ht...
At the moment, 6/9 authorities can't reach this relay. Same possibilities.
moria1 can reach this relay, but earlier today, and also at this moment, dizum is being listed as unable to reach the relay. But I checked with dizum's operator earlier today and he could reach that IP:orport via telnet.
So my current thought is intermittent overload, or perhaps some sort of "rate limiting via iptables" firewall. (It doesn't look like ipv6 reachability questions should apply here, because I think this relay isn't offering ipv6.)
--Roger
On 8/25/19 10:36 AM, Roger Dingledine wrote:
So my current thought is intermittent overload, or perhaps some sort of "rate limiting via iptables" firewall.
Hhm, at least for "zwiebeltoralf[2]" there's no rate limiting or any firewall rule limiting it. But I do have ~80 MByte/sec load on this 1 GBit/s network card (2 relays running at the same IP address), so maybe that is the problem (but why suddenly?). In the past the load was always > 60 MByte/sec combined for both relays. Well, I do wonder if the latest vanilla kernel version (I follow the stable series of Greg Kroah-Hartman) has an impact here?
So again, I assume a local problem with the system or the provider.
Hi everybody
On 2019-08-24 at 11:38 AM, Toralf Förster wrote:
I'm not convinced that this is the culprit for the mentioned relay [1].
I found another relay [2] where at least 4 of the 9 authorities don't set the "Running" flag, which is needed for "Guard", right? That relay has a reasonable bw value too (23,000; FWIW, the value for [1] is about 90,000).
So now I do wonder why the Running flag is lost after a year.
[1] https://consensus-health.torproject.org/consensus-health-2019-08-24-07-00.ht... [3] https://consensus-health.torproject.org/consensus-health-2019-08-24-08-00.ht...
I have a similar experience to Toralf's. For the past week, all my relays (but one) have been middle relays. Which is fine, because they push good traffic.
All are measured by maatu, gabel, moria1 and farav. But only one is measured by longc and bastet. [1] Why is that?
My understanding is that six of nine authorities measure bandwidth. I can download bandwidth files from all six from a _non_-measured server [2], so connectivity is given. But I see a doc size of about 2.4MB from the ones working for me and 4MB from the ones not working. Any clue what the difference is between those?
My family is [3]. The good relay is 402 and the 'bad' ones are between 405 and 415.
Don't get me wrong: whether guard, middle or exit, all relays are important. But it's a little strange.
PS: dannenberg has Missing Signature
[1] https://consensus-health.torproject.org/consensus-health.html [2] CE47F0356D86CF0A1A2008D97623216D560FB0A8 [3] https://metrics.torproject.org/rs.html#search/family:1AE039EE0B11DB79E4B4B29...
-- Cheers, Felix
Hi,
On 26 Aug 2019, at 00:21, Felix zwiebel@quantentunnel.de wrote:
I have a similar experience to Toralf's. For the past week, all my relays (but one) have been middle relays. Which is fine, because they push good traffic.
The Guard flag is affected by the bandwidth authority measurements.
All are measured by maatu, gabel, moria1 and farav. But only one is measured by longc and bastet. [1] Why is that?
longclaw and bastet run sbws; the other bandwidth authorities run Torflow.
There are 3 high-priority bugs that make sbws leave some useful relays out of its bandwidth file: https://trac.torproject.org/projects/tor/query?status=!closed&keywords=~...
These bugs stop us deploying sbws to more than 3 authorities.
We expect to have funding to fix these bugs some time in the next month or two.
Even after these changes, Torflow will still have more relays than sbws, because some of the relays that Torflow reports have been down for a long time.
My understanding is that six of nine authorities measure bandwidth. I can download bandwidth files from all six from a _non_-measured server [2], so connectivity is given. But I see a doc size of about 2.4MB from the ones working for me and 4MB from the ones not working. Any clue what the difference is between those?
My family is [3]. The good relay is 402 and the 'bad' ones are between 405 and 415.
Don't get me wrong: whether guard, middle or exit, all relays are important. But it's a little strange.
I don't think the sbws bandwidth authorities are causing the issue that you're seeing with your consensus weight or flags.
The consensus is based on majority votes, and 2/6 bandwidth authorities or 2/9 authorities are not a majority.
T
On 8/26/19 3:14 AM, teor wrote:
We expect to have funding to fix these bugs some time in the next month or two.
So I'll just wait.
FWIW, I set "RelayBandwidthRate 30 MBytes" for a day or so to see whether a possible overload of my relays could cause some trouble, but did not see any positive effect so far.
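For reference, the torrc lines in question look like this (values illustrative; tor requires RelayBandwidthBurst to be at least RelayBandwidthRate):

    RelayBandwidthRate 30 MBytes
    RelayBandwidthBurst 60 MBytes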
On 27 Aug 2019, at 05:19, Toralf Förster toralf.foerster@gmx.de wrote:
On 8/26/19 3:14 AM, teor wrote:
We expect to have funding to fix these bugs some time in the next month or two.
So I'll just wait.
Waiting might not help, if the issue is on your relay:
I don't think the sbws bandwidth authorities are causing the issue that you're seeing with your consensus weight or flags.
The consensus is based on majority votes, and 2/6 bandwidth authorities or 2/9 authorities are not a majority.
FWIW, I set "RelayBandwidthRate 30 MBytes" for a day or so to see whether a possible overload of my relays could cause some trouble, but did not see any positive effect so far.
Perhaps your provider is dropping traffic, or has bad peering?
T
On 8/26/19 11:58 PM, teor wrote:
Waiting might not help
Indeed.
The picture is: a bunch of relays, run for a long time by different operators, are affected. Examples are [1], [2] and [3]. The hosters differ (Hetzner, i3D.net B.V, Host Europe GmbH), as do the OS (Linux, OpenBSD, FreeBSD) and the country (DE, NL).
So the root cause might be a peer those hosters route their traffic through -or- something in the Tor software -or- ...
It seems there's nothing I can do here to narrow down the issue or to blame my hoster, or is there? I do wonder about the number of relays affected, meaning those that lost their Guard flag around the 15th of August and haven't gotten it back to this day.
[1] me : https://metrics.torproject.org/rs.html#details/63BF46A63F9C21FD315CD061B3EAA... [2] Felix: https://metrics.torproject.org/rs.html#details/CE47F0356D86CF0A1A2008D976232... [3] me searched this too: https://metrics.torproject.org/rs.html#details/0CDCFB0B6E1500E57BDD7F240543E...
-- Toralf PGP C4EACDDE 0076E94E
Hi everybody
This is an early report and has to be confirmed over the next days.
On 2019-08-26 at 11:58 PM, teor wrote:
On 27 Aug 2019, at 05:19, Toralf Förster toralf.foerster@gmx.de wrote:
On 8/26/19 3:14 AM, teor wrote:
We expect to have funding to fix these bugs some time in the next month or two.
So I'll just wait.
Waiting might not help, if the issue is on your relay
Trying out different settings on the server side had no effect. The relay software environment seems to interfere with sbws.
First, some background: I have run several relays on FreeBSD for a long time and always compile libressl, libevent, zstd and tor(-devel) myself, like:
Jul 15 20:59:17.470 [notice] Tor 0.4.1.3-alpha running on FreeBSD with Libevent 2.1.10-stable, OpenSSL LibreSSL 2.9.2, Zlib 1.2.11, Liblzma 5.2.3, and Libzstd 1.4.0.
This worked fine until Aug/16, when nearly all of my relays lost their guard flags:
Consensus Sep/7 ACBBB426CE1D0641A590BF1FC1CF05416FC0FF6F Planetclaire62
Fast !Running Stable V2Dir Valid bw=8200
Fast !Running Stable V2Dir Valid
!Fast Running Stable V2Dir Valid
Fast !Running Stable V2Dir Valid
!Fast Running Stable V2Dir Valid
Fast !Running Stable V2Dir Valid
Fast Running Stable V2Dir Valid bw=747
Fast Guard HSDir Running Stable V2Dir Valid
Fast Guard HSDir Running Stable V2Dir Valid bw=8180
Fast Running Stable V2Dir Valid bw=8180 bwauth=faravahar
No clue why this happened. Since then I have been searching for the reason. The switch to OpenSSL on Sep/14 brought the guard flag back today:
Sep 14 04:20:29.748 [notice] Tor 0.4.1.5 running on FreeBSD with Libevent 2.1.10-stable, OpenSSL 1.0.2s, Zlib 1.2.11, Liblzma 5.2.3, and Libzstd 1.4.0.
Consensus today ACBBB426CE1D0641A590BF1FC1CF05416FC0FF6F Planetclaire62
Fast !Guard Running !Stable V2Dir Valid bw=16000
Fast !Guard Running !Stable V2Dir Valid
Fast Guard Running Stable V2Dir Valid bw=14000
Fast !Guard Running !Stable V2Dir Valid
Fast Guard Running Stable V2Dir Valid bw=22000
Fast !Guard Running !Stable V2Dir Valid
Fast Guard Running Stable V2Dir Valid bw=12200
Fast Guard Running Stable V2Dir Valid
Fast Guard Running Stable V2Dir Valid bw=12100
Fast Guard Running Stable V2Dir Valid bw=14000 bwauth=longclaw
I don't think the sbws bandwidth authorities are causing the issue that you're seeing with your consensus weight or flags.
The sbws bandwidth authorities can now measure the bandwidth of the relay.
Can somebody confirm my observation, or does anyone have proof (please, no speculation)?
On 2019-08-28 at 8:09 PM, Toralf Förster wrote:
Toralf, maybe your software environment shows a similar incompatibility with sbws? But maybe it's something different. Good luck!
-- Cheers, Felix
On 9/16/19 9:19 PM, Felix wrote:
On Sep/14 the change to openssl brought back the guard flag today:
Hhm, I installed LibreSSL at:
2019-05-24T18:51:19 >>> dev-libs/libressl-2.9.2: 2 minutes, 39 seconds
so I do not see a correlation here.
On 9/16/19 9:19 PM, Felix wrote:
The sbws bandwidth authorities now can measure the bandwidth of the relay.
Can somebody confirm my observation or has prove (please no speculations).
I upgraded LibreSSL from 2.9.2 to 3.0.0 here on a stable Gentoo Linux and immediately got the "ReachableIPv6" flag back from all IPv6-capable BW authorities at both affected relays.
On 2019-09-21 at 4:11 PM, Toralf Förster wrote:
On 9/16/19 9:19 PM, Felix wrote:
The sbws bandwidth authorities now can measure the bandwidth of the relay.
Can somebody confirm my observation or has prove (please no speculations).
I upgraded LibreSSL from 2.9.2 to 3.0.0 here on a stable Gentoo Linux and immediately got the "ReachableIPv6" flag back from all IPv6-capable BW authorities at both affected relays.
I have different SSL library setups on the same server (FreeBSD):
sbws measurement is now working for OpenSSL 1.0.2s/t, OpenSSL 1.1.1d and LibreSSL 3.0.0.
sbws measurement is _not_ working for LibreSSL 2.9.2.
-- Cheers, Felix
Hi,
On 22 Sep 2019, at 16:40, Felix zwiebel@quantentunnel.de wrote:
I have different SSL library setups on the same server (FreeBSD):
sbws measurement is now working for OpenSSL 1.0.2s/t, OpenSSL 1.1.1d and LibreSSL 3.0.0.
sbws measurement is _not_ working for LibreSSL 2.9.2.
sbws is just a normal tor client, with a custom controller.
We need some more information to diagnose the issue, and answer these questions:
* Is this issue reproducible?
* Are all tor clients affected?
* If only some tor clients are affected, why are they affected?
* Are all bandwidth authorities affected, or just the ones running sbws?
* Are these issues actually instances of known sbws bugs?
On 26 Aug 2019, at 11:14, teor teor@riseup.net wrote:
There are 3 high-priority bugs that make sbws leave some useful relays out of its bandwidth file: https://trac.torproject.org/projects/tor/query?status=!closed&keywords=~...
T
On 2019-09-23 at 1:59 AM, teor wrote:
Hi,
Hi
We need some more information to diagnose the issue, and answer these questions:
- Is this issue reproducible?
In my FreeBSD monoculture, yes. 20 guard relays shared the same history:
Tor versions 0.4.0.5, 0.4.1.2-alpha and 0.4.1.3-alpha, all on LibreSSL 2.9.2. They had been running as guards for more than a month before they all lost their guard flags between 2019-08-15 10pm and 2019-08-16 1am.
- Are all tor clients affected?
They became middle relays so I expect no client will connect (besides onion services?). But they were pushing a lot of data as middles.
- If only some tor clients are affected, why are they affected?
No idea, sorry.
- Are all bandwidth authorities affected, or just the ones running sbws?
Short: Torflow is ok, sbws is not.
Consensus for a relay with LibreSSL 2.9.2:
maatu. (!running, fast, !guard, bw ok)
moria1 (running, fast, guard, bw ok)
farav. (running, fast, guard, bw ok)
longc. (running, !fast, !guard, no bw)
bastet (running, !fast, !guard, no bw)
All relays without 2.9.2 quickly received Running and Fast from all bw auths, and later Guard.
- Are these issues actually instances of know sbws bugs?
I don't think so.
For further testing I keep the relays like this:
All the relays are on the same dedicated server
now working ok:
79D9E66BB2FDBF25E846B635D8248FE1194CFD26 Tor 0.4.1.6, OpenSSL 1.1.1d
ACBBB426CE1D0641A590BF1FC1CF05416FC0FF6F Tor 0.4.1.5, OpenSSL 1.0.2s
9F5068310818ED7C70B0BC4087AB55CB12CB4377 Tor 0.4.1.6, LibreSSL 3.0.0
8FA37B93397015B2BC5A525C908485260BE9F422 Tor 0.4.1.5, OpenSSL 1.0.2t
suffering:
ED7F2BE5D2AC7FCF821A909E2486FFFB95D65272 Tor 0.4.1.3-alpha, LibreSSL 2.9.2
I hope that helps. Please tell me how I can help further.
-- Cheers, Felix
On 24 Sep 2019, at 03:27, Felix zwiebel@quantentunnel.de wrote:
On 2019-09-23 at 1:59 AM, teor wrote:
We need some more information to diagnose the issue, and answer these questions:
- Is this issue reproducible?
In my FreeBSD monoculture, yes. 20 guard relays shared the same history:
Tor versions 0.4.0.5, 0.4.1.2-alpha and 0.4.1.3-alpha, all on LibreSSL 2.9.2. They had been running as guards for more than a month before they all lost their guard flags between 2019-08-15 10pm and 2019-08-16 1am.
How do you know it's LibreSSL, and not simply restarting the relays?
- Are all tor clients affected?
They became middle relays so I expect no client will connect (besides onion services?). But they were pushing a lot of data as middles.
Here's what I meant:
Are all Tor instances having trouble connecting to your relays, or just some of them?
You've answered the question below.
- If only some tor clients are affected, why are they affected?
No idea, sorry.
- Are all bandwidth authorities affected, or just the ones running sbws?
Short: Torflow is ok, sbws is not.
That's not quite accurate.
Consensus for a relay with LibreSSL 2.9.2:
maatu. (!running, fast, !guard, bw ok)
The authority on maatuska appears to be affected.
moria1 (running, fast, guard, bw ok)
farav. (running, fast, guard, bw ok)
longc. (running, !fast, !guard, no bw)
bastet (running, !fast, !guard, no bw)
The bandwidth authority clients on longclaw and bastet are affected.
All relays without 2.9.2 quickly received Running and Fast from all bw auths, and later Guard.
Ok, so it does have something to do with LibreSSL. But we don't know why some other Tor instances are having trouble connecting: because it's not only sbws clients which are failing, it's authorities as well.
- Are these issues actually instances of know sbws bugs?
I don't think so.
It doesn't seem so either. This seems like a LibreSSL / Tor bug, not an sbws bug.
For further testing I keep the relays like this:
All the relays are on the same dedicated server
now working ok:
79D9E66BB2FDBF25E846B635D8248FE1194CFD26 Tor 0.4.1.6, OpenSSL 1.1.1d
ACBBB426CE1D0641A590BF1FC1CF05416FC0FF6F Tor 0.4.1.5, OpenSSL 1.0.2s
9F5068310818ED7C70B0BC4087AB55CB12CB4377 Tor 0.4.1.6, LibreSSL 3.0.0
8FA37B93397015B2BC5A525C908485260BE9F422 Tor 0.4.1.5, OpenSSL 1.0.2t
suffering:
ED7F2BE5D2AC7FCF821A909E2486FFFB95D65272 Tor 0.4.1.3-alpha, LibreSSL 2.9.2
I hope that helps. Please tell me how I can support.
Maybe there is a bug in LibreSSL 2.9.2? Or a bug between that version and other SSL libraries?
Can you reproduce this issue using Tor Browser connecting to your relays? If so, what do you see in your Tor logs?
T
On 9/21/19 4:11 PM, Toralf Förster wrote:
I upgraded LibreSSL from 2.9.2 to 3.0.0 here on a stable Gentoo Linux and immediately got the "ReachableIPv6" flag back from all IPv6-capable BW authorities at both affected relays.
Today one of the 2 affected relays got its Guard flag back. The other relay was converted from LibreSSL 2.9.2 to 3.0.0 half a day later - I will check its status tomorrow.