Hello,
I'd like to know more details about how exactly the bridge bandwidth
authority works, and if we use the "weight" of each bridge for anything.
For example, I have set up 5 obfs4 bridges with exactly the same
hardware resources, and all on the same network speed, of course.
One of them gets used by clients (say 20-50 unique clients every 6 hours
or so) while the other 4 are not used at all. This usage pattern is not
a concern for me, as it's known that bridges take time to get used,
depending on which bucket they have been assigned to, etc. So I assume
it's OK for them to be unused by any client at this particular point in
their lifetime.
But what I am curious about is this: when I look them up on Relay
Search, the used one has a measured bandwidth of over 2 MiB/s (and has
the Fast flag), 3 of the unused ones have bandwidths between 50 and 60
KiB/s (these also have the Fast flag), and the last one, also unused,
has a bandwidth of less than 10 KiB/s and does not have the Fast flag.
(The missing Fast flag is not my concern either; I am just mentioning
it as a side detail.)
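For reference, this is roughly how I pull those numbers out of Onionoo,
the service behind Relay Search (a quick Python sketch; the hashed
fingerprint is a placeholder, and I am assuming advertised_bandwidth is
the value Relay Search displays as the measured bandwidth):

    import json
    import urllib.request

    # Bridges are looked up on Onionoo by their *hashed* fingerprint.
    HASHED_FP = "0000000000000000000000000000000000000000"  # placeholder

    url = ("https://onionoo.torproject.org/details"
           "?search=" + HASHED_FP +
           "&fields=nickname,advertised_bandwidth,flags")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)

    for bridge in data.get("bridges", []):
        # advertised_bandwidth is reported in bytes/s; show KiB/s.
        bw_kib = bridge["advertised_bandwidth"] / 1024
        print(f"{bridge['nickname']}: {bw_kib:.0f} KiB/s, "
              f"flags={bridge.get('flags')}")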
Now, I know for sure those values do not correspond to the real
environment at all. Each bridge should be capable of at least 3 MiB/s
even if all 5 are used at the same time at their full speeds. I have
actually simulated this; it's not just theoretical.
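In case it helps, this is the kind of raw TCP throughput check I ran
(a rough sketch, nothing Tor-specific; the port and the 64 MiB payload
size are arbitrary choices). Run it with "server" on the bridge host
and "client <host>" from another machine:

    import socket
    import sys
    import time

    PORT = 5201
    TOTAL = 64 * 1024 * 1024  # 64 MiB test payload
    CHUNK = 64 * 1024

    def server():
        # Accept a single connection and push TOTAL bytes through it.
        with socket.create_server(("", PORT)) as srv:
            conn, _addr = srv.accept()
            with conn:
                buf = b"\x00" * CHUNK
                sent = 0
                while sent < TOTAL:
                    conn.sendall(buf)
                    sent += len(buf)

    def client(host):
        # Time how long it takes to receive the full payload.
        with socket.create_connection((host, PORT)) as sock:
            start = time.monotonic()
            received = 0
            while received < TOTAL:
                data = sock.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.monotonic() - start
        print(f"{received / elapsed / (1024 * 1024):.2f} MiB/s")

    if __name__ == "__main__":
        if sys.argv[1] == "server":
            server()
        else:
            client(sys.argv[2])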
Is there anything usage-related here, such that the bridge bandwidth
authority only measures bridges that are actually in use? What could
have caused such a big discrepancy in my particular case? Any ideas?
Also, do we use the weight of each bridge to determine the probability
of it being served in response to a request within the bucket it is
part of, or do we not use bridge weights for anything at all?
Thanks!