On 14-01-21 05:59 PM, julien.robin28@free.fr wrote:
Hi Mike,
What you said is very interesting; it was the missing part for me to understand why the relay's weight in the consensus can drop (or rise again) so quickly (sometimes 3 times per day) without being caused by any change in the bandwidth actually used on the dedicated machine's network link: the only visible changes in the server's bandwidth appeared a couple of minutes/hours after the changes in consensus weight, and they were proportional.
So the cause of the problem I encountered must lie here: in what happens when the authority servers do their bandwidth measurements on it.
Maybe an abnormally large share of circuits (depending on the origins and networks of those circuits) were almost unusable: for example, if there is a 50 percent chance of being measured with very, very low bandwidth when the authority server does the job. In that case there is no error in the algorithm; it did its job: bad bandwidth, bad ratio.
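To illustrate how I understand it, here is a minimal sketch of such ratio-based scaling (the function and the averaging against a network-wide value are my own assumptions, not the real bwauth code, which is more elaborate):

    # Minimal sketch of ratio-based scaling, as I understand it.
    # NOT the real bwauth/TorFlow algorithm; the names and the
    # averaging step are illustrative assumptions.

    def scaled_weight(advertised_bw, measured_bw, network_avg_bw):
        """Scale the advertised bandwidth by how the measurement
        compares to the network average: bad bandwidth, bad ratio."""
        ratio = measured_bw / network_avg_bw
        return int(advertised_bw * ratio)

    # A relay measured over healthy circuits keeps or gains weight...
    print(scaled_weight(advertised_bw=10000, measured_bw=8000, network_avg_bw=4000))  # 20000

    # ...while the same relay measured over broken circuits loses weight,
    # even though nothing changed on its own network link.
    print(scaled_weight(advertised_bw=10000, measured_bw=400, network_avg_bw=4000))   # 1000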
With this information, I would think that my server encounters (or encountered) some difficulty in one of these two possible locations:
1. In the machine itself, causing random bad bandwidth on some circuits, or circuits that cannot be established*, even with few people connected (at a quarter of the machine's capacity the problem was still present, even with fresh identities running alone at a pretty low bandwidth, so we can exclude normal TCP/IP socket congestion between my relay and the other ones). Also, I never got "Your computer is too slow to handle this many creation requests" while the problem was present.
2. Difficulty communicating with a particular point of the Internet, a point that is involved in the bandwidth measurements done by the authorities. If the problem is still present, I will have to verify this by observing, on Tor Atlas, other relays hosted by the same service provider, if possible in the same datacenter (Iliad DC3, France, addresses like 88.191.xxx.xxx, but maybe not only those); see the small script after this list.
An alternative for 2: maybe large-scale geographic network problems at my ISP gave circuits a 50 percent chance of working fast and fine (using all the available bandwidth) and a 50 percent chance of being slow and unusable, but that looks like too big an affair and I'm not sure it is really possible (50 percent is just an example value).
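By the way, instead of clicking through Atlas relay by relay, I could query Onionoo (the service behind Atlas) directly. A rough sketch in Python, assuming its search parameter accepts an address prefix like this (the field names are from the Onionoo details documents):

    # Rough sketch: list relays whose address starts with 88.191. and
    # compare their consensus weight fractions.
    # Assumption: Onionoo's 'search' parameter matches an address prefix.
    import json
    import urllib.request

    URL = ("https://onionoo.torproject.org/details?type=relay"
           "&search=88.191."
           "&fields=nickname,or_addresses,consensus_weight_fraction")

    with urllib.request.urlopen(URL) as resp:
        relays = json.load(resp).get("relays", [])

    for relay in relays:
        print(relay.get("nickname"),
              relay.get("or_addresses"),
              relay.get("consensus_weight_fraction"))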
Is the measurement method the same for Exit nodes and Middle/Entry nodes?
*What does the authority algorithm decide when the relay being measured cannot be built into one or more circuits?
Thank you in advance! I will wait and see over the following days or weeks and keep you informed.

Julien ROBIN
PS: while I am at it, the first drop (consensus weight fraction divided by 2, from 0.137% to 0.067%; now 12100, 0.77%) happened on ArachnideFR94v2 a few minutes ago (but with such low values the variations may be normal; we have to wait and see, and I will record the consensus weight values in an Excel sheet to be sure of what I see over the following days and weeks). Exit probability went from 0.400 to 0.200 :(
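Rather than filling the Excel tab by hand, I could let a small script append the values to a CSV file (run hourly from cron, for example). A sketch, assuming the Onionoo nickname search returns my relay; again, the field names are from the Onionoo details documents:

    # Sketch: append the relay's consensus weight fraction and exit
    # probability to a CSV file each time it runs.
    # Assumption: searching Onionoo by nickname returns the relay.
    import csv
    import json
    import time
    import urllib.request

    URL = ("https://onionoo.torproject.org/details?type=relay"
           "&search=ArachnideFR94v2"
           "&fields=nickname,consensus_weight_fraction,exit_probability")

    with urllib.request.urlopen(URL) as resp:
        relays = json.load(resp).get("relays", [])

    if relays:
        r = relays[0]
        with open("consensus_weight_log.csv", "a", newline="") as f:
            csv.writer(f).writerow([
                time.strftime("%Y-%m-%d %H:%M:%S"),
                r.get("consensus_weight_fraction"),
                r.get("exit_probability"),
            ])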