Dear Exit relay operators,
first of all thanks for running exit relays!
One of the crucial services that you provide, in addition to forwarding TCP streams, is DNS resolution for Tor clients. Exit relays which fail to resolve hostnames are barely useful for Tor clients.
We noticed that failure rates have recently increased significantly, apparently because some major exit operators are having DNS issues. We would like to urge you to visit Arthur's "Tor Exit DNS Timeouts" page, which shows the DNS error rate for exit relays:
https://arthuredelstein.net/exits/ (the page is usually updated once a day)
Please consider checking your DNS if your exit relay consistently shows a non-zero timeout rate - and make sure you run an up-to-date tor version.
If you are an exit operator but have no (or no working) ContactInfo, please consider updating that field in your torrc so we can reach you if something is wrong with your relay.
kind regards nusenu
Hi nusenu
After reading your mail, I realized that it's not the DNS records for the exit IPs that are failing; rather, this list shows problems resolving DNS on the exit itself.
I looked at our exit and all looks fine. The resolver works very fast and there's nothing important in the logfile. Only some folks use 0.100.2.2 as a remote address, but let's be fair, that can't work. ;)
There are 4 exits on one machine with one DNS server. Only 3 of them are shown in the list: https://metrics.torproject.org/rs.html#search/as:AS205100
Maybe it is a load problem, because this machine is at 100% CPU load? :(
A dedicated machine for DNS may be good, but currently we have only this one machine. Another way would be to reduce exit capacity, but I don't know if throttling it is a good idea?
Btw, in the meantime we got more upstream transit, and now we are looking to get better / second hardware. But money is a limiting factor. :(
Kind regards Tim
On Friday, 28.06.2019, at 20:16 +0000, nusenu wrote:
Just set your exit relay DNS to 8.8.8.8 and 1.1.1.1. I mean, DNS traffic isn't bulk traffic; let Google and Cloudflare do the "work".
Thanks,
Matt Westfall President & CIO ECAN Solutions, Inc. Everything Computers and Networks 804.592.1672
------ Original Message ------
From: "Tim Niemeyer" tim@tn-x.org
To: tor-relays@lists.torproject.org
Sent: 6/29/2019 2:59:34 AM
Subject: Re: [tor-relays] exit operators: overall DNS failure rate above 5% - please check your DNS resolver
On Jun 30, 2019, at 8:32 PM, Matt Westfall mwestfall@ecansol.com wrote:
Just set your exit relay DNS to 8.8.8.8 and 1.1.1.1. I mean, DNS traffic isn't bulk traffic; let Google and Cloudflare do the "work".
Utilizing Google DNS (and possibly Cloudflare DNS) introduces a significant security flaw: it allows outside entities to determine what Tor network users are looking at. Using your own DNS server, a trusted DNS server, or simply running Unbound on the same instance is significantly more secure.
Google DNS keeps its logs… Cloudflare claims to wipe theirs after 24 hours, but what's not known is whether there's an open FISA order, for example, to continuously turn over Tor-originated DNS requests during that 24-hour period.
Multiple open-source intelligence sources have established that governments are doing this exact thing to monitor Tor users, amongst other things. I'll say this: a friend of mine who previously worked with the US IC says to run Unbound or use a trusted DNS server.
Thanks,
Conrad Rockenhaus https://www.greyponyit.com/
On Jun 30, 2019, at 8:32 PM, Matt Westfall mwestfall@ecansol.com wrote:
Just set your exit relay DNS to 8.8.8.8 and 1.1.1.1 I mean dns traffic
Screw that MITM.
And unless your on-box resolver library already runs an nscd cache from rc when using a remote DNS service like the above, busy exits can also save some bandwidth by running a local DNS resolver - plus lower latency for Tor clients, better security, etc.
On Mon, 01 Jul 2019 01:32:59 +0000 "Matt Westfall" mwestfall@ecansol.com wrote:
Just set your exit relay DNS to 8.8.8.8 and 1.1.1.1 I mean dns traffic isn't bulk traffic, let google and CloudFlare do the "work"
It is considered to be a bad idea privacy-wise: https://medium.com/@nusenu/who-controls-tors-dns-traffic-a74a7632e8ca https://lists.torproject.org/pipermail/tor-relays/2016-May/009255.html https://lists.torproject.org/pipermail/tor-relays/2015-January/006146.html
On Mon, Jul 01, 2019 at 10:06:08AM +0500, Roman Mamedov wrote:
On Mon, 01 Jul 2019 01:32:59 +0000 "Matt Westfall" mwestfall@ecansol.com wrote:
Just set your exit relay DNS to 8.8.8.8 and 1.1.1.1 I mean dns traffic isn't bulk traffic, let google and CloudFlare do the "work"
It is considered to be a bad idea privacy-wise: https://medium.com/@nusenu/who-controls-tors-dns-traffic-a74a7632e8ca https://lists.torproject.org/pipermail/tor-relays/2016-May/009255.html https://lists.torproject.org/pipermail/tor-relays/2015-January/006146.html
Right, this is not recommended as best practice, because we don't want these centralized services to be able to see too large a fraction of exit destinations and timing.
https://freedom-to-tinker.com/2016/09/29/the-effect-of-dns-on-tors-anonymity...
It would be neat for somebody (maybe somebody here?) to be tracking the fraction of exit weights, over time, that are using these centralized dns servers. So we can see whether it's a growing issue or a shrinking issue, to start, and whether we need to reach out to big relay operators or not.
Thanks, --Roger
On Sat, Jun 29, 2019 at 08:59:34AM +0200, Tim Niemeyer wrote:
There are 4 exits on one machine with one dns server. Only 3 of them are shown in the list: https://metrics.torproject.org/rs.html#search/as:AS205100
Looks like all four are listed, when I checked just now.
Maybe it is a load problem, because this machine has 100% cpu load? :(
I see that your exit policy is "reject port 25, accept the rest". So I would guess that you are one of the few exit relays that is getting all of the requests for destination ports that are otherwise rejected in the default exit policy. It will make you very busy.
A dedicated machine for DNS may be good, but currently we have only this one machine. Another way would be to reduce exit capacity, but I don't know if throttling it is a good idea?
I would suggest moving to the default exit policy rather than throttling, if you're going to choose one. You might even find that you can handle even more traffic in that case.
Btw, in the mean time we got more upstream transit and now we are looking to get better / second hardware. But money is a limiting factor. :(
I'd suggest coordinating with the various torservers.net non-profits to see if any of them are looking to expand and whether you could affiliate more closely with them. It looks like your IP space is already connected to torservers.net, so it sounds like you are on your way there; still, it might be a way to grow even more.
Thanks! --Roger
I can't really understand why our relays should fail so often: the logs of our DNS daemon don't show anything, and I haven't seen the warning about failing nameservers in a long time...
Maybe the script that checks for DNS failures on exits is not reporting correctly?
Greetings
On 1 Jul 2019, at 21:41, Tyler Durden virii@enn.lu wrote:
I can't really understand why our relays should fail so often because the logs of our DNS daemon don't show anything and I haven't seen the warning about nameservers that failed for a long time...
Maybe the script that checks about DNS failures on Exits is not reporting correctly?
There are some other options worth considering:
* the script is overloading its client, which fails some requests
* the exit is overloaded with circuits or streams (and not DNS), so it fails some requests without a DNS query
* DNS fails in a way that the exit doesn't detect and log
Tor's DNS support is quite old, and it has had some significant bugs in the past. So I'd start looking there.
It's also worth checking the health of your DNS resolver. Tor exits put an unusual amount of load on DNS: there are lots of requests, for lots of different domains.
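One quick way to sanity-check resolver health from the exit host is to time a batch of lookups through the system resolver. A minimal sketch using only the Python standard library (the sample domains are placeholders; a real check would use many more names):

```python
import socket
import time

def time_resolution(host):
    """Time one lookup through the system's resolver (blocking libc call).

    Returns (ok, seconds); ok is False when resolution fails
    (NXDOMAIN, SERVFAIL, resolver timeout, ...).
    """
    start = time.monotonic()
    try:
        socket.getaddrinfo(host, None)
        return True, time.monotonic() - start
    except socket.gaierror:
        return False, time.monotonic() - start

if __name__ == "__main__":
    # Hypothetical sample; exits see an unusually wide spread of
    # domains, so a realistic health check should reflect that.
    for host in ("localhost", "torproject.org", "does-not-exist.invalid"):
        ok, secs = time_resolution(host)
        print(f"{host}: {'OK' if ok else 'FAIL'} after {secs:.3f}s")
```

Consistently slow or failing lookups here point at the resolver rather than at tor itself.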
T
One of the underlying core issues is the lack of metrics data for relay operators.
I filed the following feature requests to change this:
provide DNS health metrics for tor exit relay operators https://trac.torproject.org/projects/tor/ticket/31290
non-public relay health metrics for operators https://trac.torproject.org/projects/tor/ticket/31291
These are mostly copied from a previous tor-dev email from February 2019: https://lists.torproject.org/pipermail/tor-dev/2019-February/013655.html
Tim Niemeyer:
Maybe it is a load problem, because this machine has 100% cpu load? :(
Generally speaking, running a relay at 100% of its hardware resources all the time will not make for happy users; we should optimize for a smooth Tor Browser experience rather than for high bandwidth or hardware usage.
I don't think we have to worry about an exit failing 10% of DNS queries for a single day.
Single operators running a significant exit share (>0.5% exit probability) which fail at a high rate (>10%) consistently over multiple days are more relevant.
Since I don't see your exits currently showing up as failing, the remainder of this email is not necessarily directed at you but is more for the general record.
A dedicated machine for dns may be good, but currently we have only this one machine.
I actually believe in running DNS resolvers locally to keep paths short. The resources required for the resolver must be taken into account when planning the capacity of the entire server; the resolver can also require a decent amount of CPU time on fast exits.
In very constrained environments it might still make sense to run DNS resolvers non-locally (while not using a resolver too far away), since DNS resolvers for exits can also run where exits might not be welcome.
Using a non-local resolver is obviously still better than a local resolver that cannot keep up with the load.
Another way would be to reduce exit capacity, but I don't know if throttling it is a good idea?
With the goal to have happy users (low latency reliable exits):
On a single server with multiple cores and >1 Gbit/s connectivity (server not limited by uplink bandwidth or memory limits), I'd suggest:
1) Determine your CPU's single-thread performance: measure the peak bandwidth of Tor traffic it can manage with a given exit policy, running a single instance with no bandwidth limits. Take some ramp-up time into account - which also exists for exits. (Use measured data, not advertised bandwidth - they can be far apart.)
2) Determine how many DNS queries per second that single Tor exit instance generates, and what CPU load the resolver incurs (peak value after 1-2 weeks of operation).
3) Run as many instances as you have cores minus one, and set the bandwidth limit (RelayBandwidthRate) in your torrc to ~80% of the peak value from (1), while ensuring that there is enough spare capacity for the resolver and the OS itself.
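Per instance, the result of step (3) is just two torrc lines. A sketch with placeholder numbers (substitute your own measured values from step (1)):

```
# torrc fragment for one of several instances on the same host.
# Values are illustrative placeholders, not recommendations.
RelayBandwidthRate 25 MBytes     # ~80% of the measured single-thread peak
RelayBandwidthBurst 30 MBytes    # allow short bursts above the rate
```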
Optimize your resolver's performance and cache hit rate by experimenting with the cache size and the number of threads. Example for unbound: https://nlnetlabs.nl/documentation/unbound/howto-optimise/
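As a starting point, an unbound.conf along these lines follows that howto; every value here is a guess to be tuned against your own measurements:

```
# unbound.conf sketch for a busy exit (illustrative values only)
server:
    num-threads: 4            # roughly the cores you can spare for DNS
    so-reuseport: yes         # spread incoming queries across threads
    msg-cache-size: 128m
    rrset-cache-size: 256m    # the howto suggests ~2x msg-cache-size
    prefetch: yes             # refresh popular entries before they expire
    outgoing-range: 8192      # more sockets for the high query volume
```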
Btw, in the mean time we got more upstream transit and now we are looking to get better / second hardware. But money is a limiting factor. :(
Maybe it helps if you clearly communicate that you could easily do X Gbit/s of exit capacity if you only had the necessary hardware, and tell people where to enter their credit card details if they want to see that happen ;)
On 28.06.19 at 22:16, nusenu wrote:
Dear nusenu,
thank you for your work and for the reminder.
Apparently the same recommended setup and version produces a high failure rate on one relay while another in the same AS has no issue.
Even within the same relay, one instance can experience double the failure rate of another.
Any idea why this is the case?
Kind regards
Paul
Moin
I just played a bit with the sources of this test system.
At first I didn't get it to work, but then I changed the hard-coded guard to one of my own, and voilà ..
I picked an exit with an error rate of 100%: 0FF233C8D78A17B8DB7C8257D2E05CD5AA7C6B88
.. the test resulted in many, many "SUCCEEDED" entries.
--- %< ---
1/10: 1/1 $D6D6B6614C9EF2DAD13AC0C94487AD8ED3B6877F : [('SUCCEEDED', '2019-07-05 23:29:41.927196', 0.5837132930755615)]
2/10: 1/1 $D6D6B6614C9EF2DAD13AC0C94487AD8ED3B6877F : [('SUCCEEDED', '2019-07-05 23:29:41.927196', 0.5837132930755615), ('SUCCEEDED', '2019-07-05 23:29:42.613310', 0.4149782657623291)]
3/10: 1/1 $D6D6B6614C9EF2DAD13AC0C94487AD8ED3B6877F : [('SUCCEEDED', '2019-07-05 23:29:41.927196', 0.5837132930755615), ('SUCCEEDED', '2019-07-05 23:29:42.613310', 0.4149782657623291), ('SUCCEEDED', '2019-07-05 23:29:43.313887', 0.40435171127319336)]
4/10: 1/1 $D6D6B6614C9EF2DAD13AC0C94487AD8ED3B6877F : [('SUCCEEDED', '2019-07-05 23:29:41.927196', 0.5837132930755615), ('SUCCEEDED', '2019-07-05 23:29:42.613310', 0.4149782657623291), ('SUCCEEDED', '2019-07-05 23:29:43.313887', 0.40435171127319336), ('SUCCEEDED', '2019-07-05 23:29:43.914470', 0.3811912536621094)]
5/10: 1/1 $D6D6B6614C9EF2DAD13AC0C94487AD8ED3B6877F : [('SUCCEEDED', '2019-07-05 23:29:41.927196', 0.5837132930755615), ('SUCCEEDED', '2019-07-05 23:29:42.613310', 0.4149782657623291), ('SUCCEEDED', '2019-07-05 23:29:43.313887', 0.40435171127319336), ('SUCCEEDED', '2019-07-05 23:29:43.914470', 0.3811912536621094), ('SUCCEEDED', '2019-07-05 23:29:44.609796', 0.5054607391357422)]
6/10: 1/1 $D6D6B6614C9EF2DAD13AC0C94487AD8ED3B6877F : [('SUCCEEDED', '2019-07-05 23:29:41.927196', 0.5837132930755615), ('SUCCEEDED', '2019-07-05 23:29:42.613310', 0.4149782657623291), ('SUCCEEDED', '2019-07-05 23:29:43.313887', 0.40435171127319336), ('SUCCEEDED', '2019-07-05 23:29:43.914470', 0.3811912536621094), ('SUCCEEDED', '2019-07-05 23:29:44.609796', 0.5054607391357422), ('SUCCEEDED', '2019-07-05 23:29:45.690206', 0.7719564437866211)]
7/10: 1/1 $D6D6B6614C9EF2DAD13AC0C94487AD8ED3B6877F : [('SUCCEEDED', '2019-07-05 23:29:41.927196', 0.5837132930755615), ('SUCCEEDED', '2019-07-05 23:29:42.613310', 0.4149782657623291), ('SUCCEEDED', '2019-07-05 23:29:43.313887', 0.40435171127319336), ('SUCCEEDED', '2019-07-05 23:29:43.914470', 0.3811912536621094), ('SUCCEEDED', '2019-07-05 23:29:44.609796', 0.5054607391357422), ('SUCCEEDED', '2019-07-05 23:29:45.690206', 0.7719564437866211), ('SUCCEEDED', '2019-07-05 23:29:46.263253', 0.4417731761932373)]
8/10: 1/1 $D6D6B6614C9EF2DAD13AC0C94487AD8ED3B6877F : [('SUCCEEDED', '2019-07-05 23:29:41.927196', 0.5837132930755615), ('SUCCEEDED', '2019-07-05 23:29:42.613310', 0.4149782657623291), ('SUCCEEDED', '2019-07-05 23:29:43.313887', 0.40435171127319336), ('SUCCEEDED', '2019-07-05 23:29:43.914470', 0.3811912536621094), ('SUCCEEDED', '2019-07-05 23:29:44.609796', 0.5054607391357422), ('SUCCEEDED', '2019-07-05 23:29:45.690206', 0.7719564437866211), ('SUCCEEDED', '2019-07-05 23:29:46.263253', 0.4417731761932373), ('SUCCEEDED', '2019-07-05 23:29:47.031197', 0.5484879016876221)]
9/10: 1/1 $D6D6B6614C9EF2DAD13AC0C94487AD8ED3B6877F : [('SUCCEEDED', '2019-07-05 23:29:41.927196', 0.5837132930755615), ('SUCCEEDED', '2019-07-05 23:29:42.613310', 0.4149782657623291), ('SUCCEEDED', '2019-07-05 23:29:43.313887', 0.40435171127319336), ('SUCCEEDED', '2019-07-05 23:29:43.914470', 0.3811912536621094), ('SUCCEEDED', '2019-07-05 23:29:44.609796', 0.5054607391357422), ('SUCCEEDED', '2019-07-05 23:29:45.690206', 0.7719564437866211), ('SUCCEEDED', '2019-07-05 23:29:46.263253', 0.4417731761932373), ('SUCCEEDED', '2019-07-05 23:29:47.031197', 0.5484879016876221), ('SUCCEEDED', '2019-07-05 23:29:47.718230', 0.4196751117706299)]
10/10: 1/1 $D6D6B6614C9EF2DAD13AC0C94487AD8ED3B6877F : [('SUCCEEDED', '2019-07-05 23:29:41.927196', 0.5837132930755615), ('SUCCEEDED', '2019-07-05 23:29:42.613310', 0.4149782657623291), ('SUCCEEDED', '2019-07-05 23:29:43.313887', 0.40435171127319336), ('SUCCEEDED', '2019-07-05 23:29:43.914470', 0.3811912536621094), ('SUCCEEDED', '2019-07-05 23:29:44.609796', 0.5054607391357422), ('SUCCEEDED', '2019-07-05 23:29:45.690206', 0.7719564437866211), ('SUCCEEDED', '2019-07-05 23:29:46.263253', 0.4417731761932373), ('SUCCEEDED', '2019-07-05 23:29:47.031197', 0.5484879016876221), ('SUCCEEDED', '2019-07-05 23:29:47.718230', 0.4196751117706299), ('SUCCEEDED', '2019-07-05 23:29:48.309022', 0.44235873222351074)]
--- >% ---
My Patch:
--- %< ---
diff --git a/relay_perf.py b/relay_perf.py
index 52b5444..cb54371 100644
--- a/relay_perf.py
+++ b/relay_perf.py
@@ -14,7 +14,7 @@ from twisted.web.client import readBody
 def write_json(filestem, data):
     now = datetime.datetime.now().strftime("%Y%m%d_%H%M");
-    print(data)
+    #print(data)
     jsonStr = json.dumps(data)
     with open(filestem + "_" + now + ".json", "w") as f:
         f.write(jsonStr)
@@ -103,11 +103,14 @@ async def _main(reactor):
     config.save()
     routers = state.all_routers
-    guard1 = state.routers_by_hash["$F6740DEABFD5F62612FA025A5079EA72846B1F67"]
+    guard1 = state.routers_by_hash["$9973E1E9730A58FDBA9E112D2B3342D2C0D921B5"]
     exits = list(filter(lambda router: "exit" in router.flags, routers))
+    exits = list(filter(lambda router: "0FF233C8D78A17B8DB7C8257D2E05CD5AA7C6B88" in router.unique_name, exits))
     exit_results = await test_exits(reactor, state, socks, guard1, exits, 10)
     exit_results["_relays"] = relay_data(True)
-    write_json("../all_exit_results/exit_results", exit_results)
+    write_json("exit_results.json", exit_results)
+
+    return

     exit_node = state.routers_by_hash["$1AE949967F82BBE7534A3D6BA77A7EBE1CED4369"]
     relays = list(filter(lambda router: "exit" not in router.flags, routers))
--- >% ---
Regards, Tim
On Friday, 28.06.2019, at 20:16 +0000, nusenu wrote:
Thanks for the confirmation! That's what I suspected.