Hi,
I wrote down a spec for a simple web of trust for relay operator IDs:
https://gitlab.torproject.org/nusenu/torspec/-/blob/simple-wot-for-relay-ope...
This is related to: https://gitlab.torproject.org/tpo/network-health/metrics/relay-search/-/issu... https://lists.torproject.org/pipermail/tor-relays/2020-July/018656.html
kind regards, nusenu
Hi nusenu!
Maybe you would like to open a merge request or post it on tor-dev in its entirety so we can comment? Whatever you prefer.
Thanks! David
Maybe you would like to open a merge request or post it on tor-dev in its entirety so we can comment? Whatever you prefer.
https://gitlab.torproject.org/tpo/core/torspec/-/merge_requests/49
While I understand the rationale for proposals such as these, and agree there is a problem with malicious relays on the network, I feel that proposals like this one:
- Raise the barrier to entry. People who would like to contribute to the network by running a relay or several relays would now have this extra administrative burden.
- Add extra verification steps and collected details that nibble away at one's ability to contribute to the network anonymously.
- Despite individuals' best intent, systems and processes for collecting and aggregating personal details often have vulnerabilities. These vulnerabilities, when exploited, could be used to harm the very people the project is designed to protect.
Z
(sorry for replying directly before) On 2021-10-03 16:16, nusenu-lists at riseup.net wrote:
Hi,
I wrote down a spec for a simple web of trust for relay operator IDs:
Some comments, in no particular order:
Why not just put the keys in directly, or even a magnet link to your latest web of trust? That would remove the need to trust SSL CAs.
What problems does this solve, specifically, and how? If I - me personally, not the generic I - wanted to spin up a relay, how would I do that?
Would I go on this mailing list and ask random people to sign my relay? If so, it's not very useful.
Or would I just run it without any signatures at all? If so, it's not very useful.
The basic problem, I think, is the same as for PGP: it's not really clear what you're attesting to when you sign. If I sign my mate's relay, and then that relay turns out to be dodgy, do I also lose my relay operation privileges?
I think that WoT systems have definite value for preventing Sybil attacks, and they are very powerful. I don't think these issues are insurmountable, but they have to be addressed.
If you're going to do it in a "machine-friendly" manner, then I suppose you have to come up with some kind of formalized notion of what trust represents, maybe have some numerical scale so you can define (just as an example) 100 = "I've personally audited the hardware", 70 = "This is an organization I trust", 10 = "I know who this person is, it's not just a fresh hotmail".
Or, you can do it in a "human-friendly" manner, where you just write text notes with each trust relationship. That would make it quite useless to parse, but could be useful to give us some information about relays.
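To illustrate the difference, here is a rough Python sketch of a machine-friendly statement that also carries a human-friendly note; the field names, the scale, and the threshold are made up for illustration, not taken from any spec:

from dataclasses import dataclass

# Example scale, as suggested above (values are arbitrary):
#   100 = "I've personally audited the hardware"
#    70 = "This is an organization I trust"
#    10 = "I know who this person is, it's not just a fresh hotmail"

@dataclass
class TrustStatement:
    truster: str       # who makes the statement
    trustee: str       # who is being trusted
    level: int         # machine-friendly: a point on the agreed scale
    note: str = ""     # human-friendly: free-text explanation

statements = [
    TrustStatement("ta.example.org", "relays.example.net", 70,
                   "Organization I have worked with for years."),
    TrustStatement("ta.example.org", "alice.example.com", 10,
                   "Met Alice at a conference; the identity seems real."),
]

# A consumer can then apply whatever threshold it considers meaningful.
print([s.trustee for s in statements if s.level >= 50])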
Now, here's my gut feeling:
Instinctively, it seems silly to have the trust relationships denote "this person is a good relay operator" (how would you even quantify that?), and maybe more reasonable to have them denote "I know this guy, he didn't just pop into existence last Thursday". And if you're doing that, it seems like the second approach makes more sense. That clearly limits what the system can do, but it could still be useful.
Anyway, if you're going to do that, it might also be reasonable to hook into a pre-existing web of trust, like GPG or something. That way, we can encode stuff like "I trust my mate Alice, she isn't a relay operator, she trusts Bob, who is, therefore I transitively trust Bob." This doesn't work great if Alice has to register in the separate Tor Web of Trust thing. (On the other hand, we introduce the problem of someone doing a Sybil by being introduced to random people who will sign literally anything, not being aware of Tor, and then showing up with plausible-looking trust pairs. But maybe that's not such a big problem, because that arguably looks even shadier?)
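A minimal sketch of the transitive case above, with a made-up trust graph (nothing here comes from an actual keyring or from the spec):

from collections import deque

# who -> the set of people/operators they have vouched for (invented data)
trust_edges = {
    "me": {"alice"},
    "alice": {"bob"},   # Alice is not a relay operator herself
    "bob": set(),       # Bob runs a relay
}

def transitively_trusted(start, max_hops=2):
    """Breadth-first walk over the trust graph, limited to max_hops steps."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue
        for nxt in trust_edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    seen.discard(start)
    return seen

print(transitively_trusted("me"))   # {'alice', 'bob'}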
I think this is a very good initiative, anyway.
Hi,
thanks for your input.
There will be a new iteration of the draft and I will reply to your email again once that is done, as it should cover some of the areas you mentioned.
kind regards, nusenu
current version of the text: https://nusenu.github.io/tor-relay-operator-ids-trust-information/
Some comments, in no particular order:
Why not just put the keys in directly, or even a magnet link to your latest web of trust? That would remove the need to trust SSL CAs.
Since the spec does not mention keys, which keys do you mean?
Note that the level of indirection trust information -> operator ID -> relay ID is crucial. Anything that requires assigning trust to individual relays does not really scale well, and we trust relays largely by trusting their operators (and less so on other factors).
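A minimal sketch of that indirection, with invented fingerprints and operator IDs; how a relay proves which operator ID it belongs to is out of scope here:

# Operator IDs (domains) a consumer has decided to trust, e.g. via its TAs.
trusted_operator_ids = {"relays.example.org"}

# Relay fingerprint -> operator ID it has proven it belongs to (invented data).
relay_to_operator = {
    "FP_AAAA1111": "relays.example.org",
    "FP_BBBB2222": "relays.example.org",
    "FP_CCCC3333": "unrelated.example.net",
}

def relay_is_trusted(fingerprint):
    """A relay inherits trust from its operator ID; there is no per-relay trust."""
    return relay_to_operator.get(fingerprint) in trusted_operator_ids

print([fp for fp in relay_to_operator if relay_is_trusted(fp)])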
What problems does this solve, specifically, and how? If I - me personally, not the generic I - wanted to spin up a relay, how would I do that?
Would I go on this mailing list and ask random people to sign my relay? If so, it's not very useful.
Or would I just run it without any signatures at all? If so, it's not very useful.
The basic problem, I think, is the same as for PGP: it's not really clear what you're attesting to
I've tried to make it more clear now and I've added a second point:
a TA asserts that (1) a given operator ID is running relays without malicious intent and (2) they have met at least once in a physical setting (not just online)
https://github.com/nusenu/tor-relay-operator-ids-trust-information/blob/main...
If I sign my mate's relay,
Note: there is no manual signing and no trust at the individual relay level in the spec.
and then that relay turns out to be dodgy, do I also lose my relay operation privileges?
No, but you will likely lose people's trust in your ability to assert a third party's trust level. So if you were a TA for someone before, you will probably lose that ability, but it is up to the consumer of trust information to define their own rules for which TAs to trust and how to respond to TA "errors".
Thanks to your input I added support for negative trust configurations: https://nusenu.github.io/tor-relay-operator-ids-trust-information/#negative-...
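To sketch the consumer side (the syntax below is invented and not the format from the linked document): distrust entries simply override positive assertions, and each consumer picks its own threshold.

# TA -> operator IDs it vouches for / explicitly distrusts (invented data).
positive = {
    "ta-one.example": {"good-op.example", "mixed-op.example"},
    "ta-two.example": {"good-op.example"},
}
negative = {
    "ta-two.example": {"mixed-op.example"},
}

def effective_trust(operator_id, required_tas=1):
    """Trust an operator if enough TAs vouch for it and no TA distrusts it."""
    vouches = sum(operator_id in ops for ops in positive.values())
    distrusted = any(operator_id in ops for ops in negative.values())
    return vouches >= required_tas and not distrusted

for op in ("good-op.example", "mixed-op.example"):
    print(op, effective_trust(op))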
If you're going to do it in a "machine-friendly" manner, then I suppose you have to come up with some kind of formalized notion of what trust represents, maybe have some numerical scale so you can define (just as an example) 100 = "I've personally audited the hardware", 70 = "This is an organization I trust", 10 = "I know who this person is, it's not just a fresh hotmail".
Currently, by publishing an operator ID (= domain), a TA only claims that "this operator runs relays without malicious intent" and that they have met at least once. It does not say anything about the operational security practices of an operator.
Having a granularity of 100 steps to denote the trust level is too much in my opinion. Let's keep it simple.
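To make the "keep it simple" point concrete, here is a hypothetical sketch in which a TA's published trust information is just a set of operator IDs, and inclusion implies both claims at once (no numerical scale); the file format and location are not taken from the spec:

# What a TA publishes, reduced to its simplest form (invented example data).
ta_published_operator_ids = {"relays.example.org", "alice-relays.example.net"}

def ta_claims(operator_id):
    """Inclusion implies both claims; absence implies neither (it is not distrust)."""
    listed = operator_id in ta_published_operator_ids
    return {"non_malicious_intent": listed, "met_in_person": listed}

print(ta_claims("relays.example.org"))
print(ta_claims("unknown.example.net"))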
Anyway, if you're going to do that, it might also be reasonable to hook into a pre-existing web of trust, like GPG or something. That way, we can encode stuff like "I trust my mate Alice, she isn't a relay operator, she trusts Bob, who is, therefore I transitively trust Bob."
I don't think there is much benefit in using existing GPG signatures, because signatures on GPG keys only make claims about identities; they do not make any claims about non-malicious relay operator intentions. Malicious operators are willing to go quite far, as we see in practice. I guess finding a poor person willing to go to the next GPG key signing event for money would be trivial for them.
kind regards, nusenu