Hi all,
Since the first post, I've received some comments off-list and consequently added more sections to the proposal. In particular, it adds a detailed description of the messages that need to be passed between the witnesses, paragraphs answering some questions, and a link to the official paper.
I hope it is clearer and answers most of your questions :)
Please, feel free to ask questions on/off list, I'd be happy to discuss it anytime ;) (irc: nikkolasg)
Thanks,
Nicolas
version 0.2:
- 3.2.1 Setup: explicitly using Fallback Directory Mirrors as witnesses
- 3.2.2 Operations: what a witness that refuses to sign does
- 3.2.3: added incremental deployment section
- 3.3: added "Evolution of the CoSi set of witnesses" section
- 3.1: added signature description
- 5 Specifications: 5.1 Protocol, 5.2 Format
------------------------------------------------------------------------------------------

Filename: tor_cosi.txt
Title: Tor Cosi
Author: Nicolas GAILLY, DeDiS lab, EPFL
Created: 09.03.2016
Status: draft
Version: 0.2
0. Introduction
This document describes how to provide and use a decentralized witness cosigning mechanism in order to gain proactive transparency and public accountability for the Tor consensus documents. Directory Authorities (DAs) send their consensus document to this set of witnesses and embed the resulting collective signature within the document. Tor relays and clients can choose to refuse a consensus document if it has not been accepted and signed by a threshold of witnesses.
1. Overview
A weakness of the current DA system is that if any 5 of the 9 DAs’ keys are stolen or coerced they could be used to sign fake directories that the attacker might use secretly in another part of the world to compromise Tor clients in the attacker’s domain. We propose to address this class of attacks by incorporating decentralized witness cosigning (CoSi) into the directory signing process, which ensures that any consensus document must be not only signed by appropriate DAs, but also publicly witnessed, signed and logged by a larger group of servers acting as witnesses, before clients will accept the directory.
A Tor relay or client expects to receive an additional "CoSi" signature alongside the consensus document. It verifies that the signature is correct and that a sufficient number of witnesses attested that the consensus document is valid. The Tor project would fix such a threshold in the default configuration, but any user or relay is free to adjust this value to its own needs. In order to verify the signature, clients need to have the individual public keys of all the witnesses beforehand. This "CoSi certificate" can be embedded in the software, much as is done for certificate pinning.
2. Motivation
Tor's DA set consists of 9 servers (and one extra for the bridges): 4 of them are in the US and 5 of them are in the EU. Attacking these central and vital points of the Tor network is clearly within the reach of state-level adversaries if they were to collaborate. Recent stories about surveillance show that such collaboration is already happening.
For example, let's imagine a plausible situation where a state-level attacker secretly coerces and/or steals the private keys of 5 of the 9 DAs, takes them back to the Republic of Tyrannia where they control the ISPs and the country’s Internet connectivity to the rest of the world. The government embeds those keys in their “Great Firewall” type devices, and uses them to secretly MITM attack targeted Tor users within Tyrannia by giving them correctly-signed but completely false views of the Tor directory in which all of the available relays are run by the Tyrannian authorities. Since this attack does not alter the consensus documents that the legitimate DAs are regularly broadcasting to the rest of the world, neither the Tor project nor anyone else outside of Tyrannia will have the opportunity to see or become aware of the fake consensus documents, and not many people even inside Tyrannia might have the opportunity to detect the attack if it is carefully targeted against the small number of suspected activists and journalists the government does not like.
The main goal of CoSi applied in Tor should be to ensure consensus document transparency: that is, ensure the property that any consensus document that any Tor client anywhere will accept has been observed and logged by a significant number of parties throughout the world, so that any misuse of a quorum of 5 DA keys anywhere will be quickly detectable (soon, if not necessarily during the signing process itself).
3. Design
The first part describes the architecture of a "CoSi" system; the second goes into more detail about how such a system can be integrated into Tor. For more details, please refer to the official paper [0].
3.1 Simplified Architecture
The designated leader arranges all the witnesses in a tree. The public keys of all participants are aggregated to form an aggregate public key. The tree is only there for performance reasons and can be reconfigured at any point in time without affecting security (the latest experiments showed that we could run up to 8000 nodes with a 2-second delay for generating a signature). Once the tree is constructed, its "certificate" consists of the aggregate public key and the individual public keys of all witnesses. At the end of a signature round, the client will be able to verify the CoSi signature (which uses Schnorr signatures) using the aggregate public key.
A CoSi signature has two components:
- a Schnorr signature
- exceptions: a bitmap whose length is equal to the size of the witness group, used to mark absent witnesses or witnesses refusing to sign (see 3.2.2 Operations).
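As a rough illustration only (not part of the specification), these two components and the aggregate public key could be represented as in the Go sketch below. The type and function names are hypothetical and do not mirror the dedis/cothority code; filippo.io/edwards25519 is used purely as a stand-in for the underlying group arithmetic.

    package cosi

    import "filippo.io/edwards25519"

    // Signature is the collective signature attached to a consensus document.
    type Signature struct {
        C          *edwards25519.Scalar // Schnorr challenge
        R          *edwards25519.Scalar // aggregate Schnorr response
        Exceptions []bool               // one entry per witness; true = did not sign
    }

    // AggregateKey sums the individual witness public keys into the single
    // key that clients verify (C, R) against.
    func AggregateKey(witnesses []*edwards25519.Point) *edwards25519.Point {
        agg := edwards25519.NewIdentityPoint()
        for _, pk := range witnesses {
            agg.Add(agg, pk) // agg = agg + pk
        }
        return agg
    }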
Besides contributing to the signing, each witness can and should perform any readily feasible syntactic and semantic correctness checks on the leader’s proposed statements before signing off on them. Witnesses can, and probably should, publish logs of the statements they witness, or simply make available a public mirror of everything their tree roster has been asked to sign.
3.2 CoSi in Tor
3.2.1 Setup
CoSi relies on a list of decentralized cosigning witnesses, an optional role to be supported by a future version of the standard Tor relay software. The set of witness servers will initially be defined as the DAs and a subset of the Fallback Directory Mirrors [2], namely the set of relays that (a) are deemed stable enough by the same criteria as used for selecting Fallback Directory Mirrors, (b) are running a recent enough version of the Tor relay software to support the CoSi witness server role, and (c) have not opted out of playing this CoSi witness role. That way the deployment cost is drastically reduced, as the software will already have the keys needed to verify a CoSi signature.
Another criterion for the tree construction is the latency between witnesses. One can collect information about the communication latencies between the witnesses and construct a shortest-path spanning tree using this data in order to reduce the global latency of the system.
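The exact tree-construction algorithm is deliberately left open. Purely as an illustration of the idea above, a leader could run a shortest-path computation (Dijkstra) over the measured latency matrix and use the resulting parent pointers as the communication tree, as in the Go sketch below; branching-factor and depth limits, which a real deployment would need, are omitted, and all names are hypothetical.

    package cositree

    import "math"

    // SpanningTree returns parent[i] for every witness i, with parent[leader] = -1.
    // latency[i][j] is the measured latency between witnesses i and j; the matrix
    // is assumed symmetric and fully populated.
    func SpanningTree(latency [][]float64, leader int) []int {
        n := len(latency)
        dist := make([]float64, n)
        parent := make([]int, n)
        done := make([]bool, n)
        for i := range dist {
            dist[i] = math.Inf(1)
            parent[i] = -1
        }
        dist[leader] = 0

        for iter := 0; iter < n; iter++ {
            // Pick the closest node not yet in the tree.
            u, best := -1, math.Inf(1)
            for i := 0; i < n; i++ {
                if !done[i] && dist[i] < best {
                    u, best = i, dist[i]
                }
            }
            if u == -1 {
                break // remaining witnesses are unreachable
            }
            done[u] = true
            // Relax edges out of u.
            for v := 0; v < n; v++ {
                if !done[v] && dist[u]+latency[u][v] < dist[v] {
                    dist[v] = dist[u] + latency[u][v]
                    parent[v] = u
                }
            }
        }
        return parent
    }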
3.2.2 Operations
Once the tree is set up and every witness knows about it (i.e., has the certificate), the signature process happens for each consensus document. The CoSi protocol produces a collective signature in response to the initiation of the protocol by a leader. This signature is then included in the consensus document so clients don't have to request it from another party.
If a witness does not want to sign, it sets the bit at its index in the bitmap. Its index is its position in the list of witnesses from the CoSi certificate. The client will then see a "1" bit in the bitmap, and will subtract the corresponding public key of the witness from the aggregate public key. That way, the client is still able to verify the signature and it knows which witnesses refused to sign off. The mechanism is similar for witnesses that went offline: the parent of an offline witness will set the failed witness's bit in the bitmap.
One issue for discussion is who should initiate CoSi protocol rounds and at what times. For example, each of the 9 DAs (or whatever subset is online) could independently initiate CoSi rounds on each directory consensus event, producing up to nine separate, redundant collective signatures on each directory consensus. Alternatively, the common case might be for one of the 9 DAs to be the CoSi initiator at a given time, with a round-robin leader-change mechanism ensuring that another leader takes over if the prior one becomes unavailable.
A related issue for discussion is whether it could be problematic if there are two or more distinct collective signatures for a given directory consensus, and whether it is a problem if distinct subsets of 5 DAs might (perhaps accidentally) produce multiple slightly different, though valid and legitimately-signed, consensus documents at about the same time. In other words, does Tor directory consensus “need” strong consistency with a single serialized timeline, as Byzantine consensus protocols are intended to provide - or is weak consistency with occasional cases of multiple concurrent consensus documents and/or collective signatures acceptable?
3.2.3 Integration
First of all, integrating CoSi would *not* immediately affect the fundamental structure or function of the current DAs: there could still be 9 of them, of which any 5 can authorize the release of a new consensus document, as they do now. Secondly, CoSi would not necessarily change anything about how the 9 DAs decide on how to compute these directory consensus documents; e.g., it would not prevent the DAs from working together to block the inclusion of (or assignment of bandwidth-weight to) relays that might be perceived by the DAs as doing bad things. Finally, full backward compatibility with old Tor clients and relay software may be maintained by treating the new CoSi-generated collective signature as just an additional signature that gets attached to and distributed with consensus documents. It may be treated as a special “10th virtual DA” that does not help authorize decisions but just publicly witnesses the output of the regular 9 DAs. Old client and relay software can simply ignore that new collective signature, whereas new software might look for it and over time gradually increase the threshold number of witnesses it expects to see.
For incremental deployment purposes, it's reasonable to expect that those Fallback Directory Mirror servers would not deploy CoSi support all at the same time. Moreover, this set of witnesses might be a subset of the Fallback Directory Mirrors, and some of them might not want to be cosigners. The hard-coded list of mirrors in the Tor client might need to be annotated with a “cosigner flag" for each such mirror, allowing some bootstrap relays to be cosigners and others not. The witness group is then simply the subset of fallback directory mirrors with the cosigner flag set.
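For illustration only, such an annotated entry and the derived witness group could look like the Go sketch below; the field names are hypothetical and do not match tor's actual hard-coded fallback list format.

    package fallback

    // Mirror is one hard-coded fallback directory mirror entry, annotated
    // with a hypothetical cosigner flag.
    type Mirror struct {
        Fingerprint string // relay identity fingerprint
        Address     string // IP:ORPort
        Cosigner    bool   // true if this mirror also acts as a CoSi witness
    }

    // WitnessGroup is simply the subset of fallback mirrors with the
    // cosigner flag set.
    func WitnessGroup(mirrors []Mirror) []Mirror {
        var witnesses []Mirror
        for _, m := range mirrors {
            if m.Cosigner {
                witnesses = append(witnesses, m)
            }
        }
        return witnesses
    }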
3.3 Evolution of the CoSi set of witnesses
One obvious solution for the evolution of the CoSi set of witnesses lies in Tor's versioning mechanism. A particular Tor client version would be associated with a particular cosigning group, consisting of the DAs and the Fallback Directory Mirrors whose keys are embedded into the source code of that Tor version with the "cosigner flag" set. A client will have the latest set of CoSi keys when and only when its Tor client is upgraded - just like the list of directory authorities.
3.4 Optional: Break-the-glass Emergency Directory Adjustments
In case of emergency, the delay caused by having to coordinate among 5 DAs in order to make anything happen (e.g., excluding a set of malicious nodes) can be problematic.
This section proposes a mechanism in which the CoSi witnesses can accept and witness not just “full consensus” documents (signed by 5 DAs), but also “emergency adjustments”, which are highly-constrained deltas (diffs) to an existing full consensus document signed by a smaller threshold of DAs, e.g., 2 or even just 1. For example, the CoSi witness cosigning rules might require that an emergency directory-adjustment must:
- be a diff against a “fresh”, recent full consensus document (perhaps *the* most recent one),
- make no modifications to the full consensus other than some pre-defined operations such as decreasing bandwidth weights assigned to relays,
- not affect the directory-wide total bandwidth weight by more than X% (e.g., 1% or .1%).
These suggestions are just a few imaginable rules to get the idea across; the “right” rules would of course need much more discussion. This way, if one or two DAs discover or even strongly suspect an attack of some kind, they can take emergency countermeasures against the attack and roll them out to clients quickly without having to get a full 5 DAs out of bed - but because the delta-consensus is still witness-cosigned automatically by (perhaps) all the DAs and a number of additional trusted relays, we get the strong accountability provision that the use of such a “break-the-glass” emergency provision will immediately become known to the other DAs as soon as they do get out of bed.
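As a toy illustration of how a witness might mechanically enforce rules of this kind before cosigning, consider the Go sketch below; the types, rule thresholds, and helper names are all hypothetical and only meant to make the idea concrete.

    package emergency

    // WeightDelta lowers the bandwidth weight of a single relay.
    type WeightDelta struct {
        RelayID   string
        OldWeight int64
        NewWeight int64 // must not be larger than OldWeight
    }

    // Adjustment is a highly-constrained delta against a recent full consensus.
    type Adjustment struct {
        BaseConsensusDigest string        // must match a "fresh" full consensus
        Deltas              []WeightDelta // the only allowed kind of change
    }

    // Acceptable returns true if the adjustment is based on the expected fresh
    // consensus, only lowers bandwidth weights, and removes at most maxFraction
    // of the directory-wide total weight.
    func Acceptable(adj Adjustment, freshDigest string, totalWeight int64, maxFraction float64) bool {
        if adj.BaseConsensusDigest != freshDigest {
            return false
        }
        var removed int64
        for _, d := range adj.Deltas {
            if d.NewWeight > d.OldWeight || d.NewWeight < 0 {
                return false
            }
            removed += d.OldWeight - d.NewWeight
        }
        return float64(removed) <= maxFraction*float64(totalWeight)
    }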
Such a break-the-glass emergency adjustment mechanism could be designed, if desired, so that ordinary clients and relays cannot immediately tell the difference between a directory consensus produced via the normal threshold of 5 DAs and one that was produced as a delta via the emergency adjustment mechanism. Only the witness cosigners would necessarily need to know which collectively-signed directories were authorized via the full consensus procedure and which via a break-the-glass adjustment. So if it is important to keep the precise reason for a particular directory update secret from the general public, that can be accommodated. Only the more-trusted group of witness cosigners (and obviously all the DAs themselves) would necessarily know the precise origin and administrative justification of a given directory update. With even fancier crypto, even the witnesses would not necessarily need to know, but that’s beyond the scope of this proposal and its desirability may be questionable at any rate.
4. Security implications
4.1 Cons
Since the structure is a tree, if any node fails, there must be some failover mechanism to restore connectivity between the children of the failed node and the rest of the tree. Since the DAs reach consensus every hour [1], and with the help of the gossiping network, availability is not a pressing issue.
4.2 Benefits
Technically, it is quite easy to implement witness cosigning if the group of witnesses is small. If we want the group of witnesses to be large, however – and we do, to ensure that compromising transparency would require not just a few but hundreds or even thousands of witnesses to be colluding maliciously – then gathering hundreds or thousands of individual signatures could become painful and inefficient. Worse, every client would need to check all these signatures individually. The key technical contribution of our research is a distributed protocol that makes large, decentralized witness cosigning groups practical. This decentralized approach enables the security of the whole system to scale with the number of witnesses.
Not only does this system dramatically increase the cost of successfully deploying an attack on the DA (the attacker would have to corrupt a large majority of the witnesses), it also decreases the incentive to launch such an attack because the threshold of witnesses that are required to sign the document for the signature to be accepted can be locally set on each client.
4.3 Differences between CoSi and Certificate Transparency
Prior transparency mechanisms have two weaknesses. First, they do not significantly increase the number of secret keys an attacker must control to compromise any client device, and client devices cannot even retroactively detect such compromise unless they can actively communicate with multiple well-known Internet servers. For example, even with Certificate Transparency, an attacker can forge an Extended Validation (EV) certificate for Chrome after compromising or coercing only three parties: one Certificate Authority (CA) and two log servers. Since many CAs and log servers are in US jurisdiction, such an attack is clearly within reach of the US government. If such an attack does occur, Certificate Transparency cannot detect it unless the victim device has a chance to communicate or gossip the fake certificate with other parties on the Internet – after it has already accepted and started using the fake digital certificate. In the case of Tor Transparency, the attack is harder because the attacker would have to compromise the three parties plus a majority of Directory Authorities, but facing a state-level adversary the threat is still plausible. One way to increase the difficulty of the attack is to make sure the log servers are scattered across different parts of the world.
Second, the attacker can still evade transparency by controlling the client’s Internet access paths. For example, a compromised Internet service provider (ISP) or corporate Internet gateway can defeat retroactive transparency mechanisms by persistently blocking a victim device’s access to transparency servers elsewhere on the Internet. Even if the user’s device is mobile, a state intelligence service such as China’s “Great Firewall” could defeat retroactive transparency mechanisms by persistently blocking connections from a targeted victim’s devices to external transparency servers, in the same way that China already blocks connections to many websites and Tor relays.
Using CoSi requires the clients to have the list of public keys of all the witnesses embedded in the software, like certificate pinning. In order to reduce the size of this CoSi certificate, we embed the aggregate public key of all the witnesses and a hash representing the root of a Merkle tree containing the public keys of all the witnesses. Using the certificate this way provides a universally-verifiable commitment to all the witnesses’ public keys, without the certificate actually containing them all.
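For illustration, the Merkle-root part of this commitment could be computed as in the Go sketch below (SHA-256, with an odd node carried up unchanged); this is a hypothetical sketch, not a specification of the actual certificate format.

    package cosi

    import "crypto/sha256"

    // MerkleRoot commits to the ordered list of witness public keys (encoded
    // as bytes). Leaves are hashed, then adjacent pairs are hashed together
    // level by level until a single root remains.
    func MerkleRoot(pubKeys [][]byte) [32]byte {
        if len(pubKeys) == 0 {
            return [32]byte{}
        }
        level := make([][32]byte, len(pubKeys))
        for i, pk := range pubKeys {
            level[i] = sha256.Sum256(pk)
        }
        for len(level) > 1 {
            var next [][32]byte
            for i := 0; i < len(level); i += 2 {
                if i+1 == len(level) {
                    next = append(next, level[i]) // odd node carried up
                    continue
                }
                pair := append(level[i][:], level[i+1][:]...)
                next = append(next, sha256.Sum256(pair))
            }
            level = next
        }
        return level[0]
    }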
5. Specifications
5.1 Protocol
We briefly describe the protocol here; for a more detailed explanation, please refer to the academic paper [0]. The setup is as described in 3.2.1. The protocol itself consists of four phases:
- Announcement: The leader broadcasts the consensus document down to its children, which in turn broadcast it to their children, etc.
- Commitment: When a leaf of the CoSi tree gets the consensus document, it generates its random value v(i) and the corresponding commitment V(i), and sends V(i) up to its parent. If a leaf refuses to sign this consensus document, it does not create any commitment. Each intermediate node aggregates the commitments of its children, adds its own commitment (or nothing if it refuses to sign), and sends the result up the tree. The root obtains the aggregate commitment V of all signing witnesses.
- Challenge: The root then computes the challenge c = H( m || V ), with m being the consensus document and H being a collision-resistant hash function that returns a scalar, and distributes the challenge down the tree as in the Announcement phase.
- Response: Starting from the leaves, each witness computes its response r(i) = v(i) - c * x(i), where x(i) is the long-term private key of the witness. If the witness refuses to sign, it simply sets the n-th bit of the bitmap to "1", where n is the index of the witness in the "CoSi certificate" (the list of all individual public keys). Each intermediate node in the tree aggregates the responses and bitmaps of all its children together with its own response/bitmap, and sends the result up the tree. At the end of the protocol, the root obtains the aggregate response r.
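Purely as an illustrative sketch of this Response step (with filippo.io/edwards25519 standing in for the group arithmetic; function names are not part of any specification):

    package cosi

    import "filippo.io/edwards25519"

    // Response computes one witness's share r(i) = v(i) - c*x(i) from its random
    // value v, the challenge c, and its long-term private scalar x.
    func Response(v, c, x *edwards25519.Scalar) *edwards25519.Scalar {
        cx := edwards25519.NewScalar().Multiply(c, x)   // c * x(i)
        return edwards25519.NewScalar().Subtract(v, cx) // v(i) - c * x(i)
    }

    // AggregateResponses sums the responses received from children together with
    // the node's own response (pass nil for a refusing witness).
    func AggregateResponses(own *edwards25519.Scalar, children []*edwards25519.Scalar) *edwards25519.Scalar {
        sum := edwards25519.NewScalar() // starts at zero
        if own != nil {
            sum.Add(sum, own)
        }
        for _, r := range children {
            sum.Add(sum, r)
        }
        return sum
    }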
The signature is the tuple (c,r) and must be included in the consensus document. If no exceptions occurred (i.e. the bitmap contains all "0"s), the signature can be verified against the aggregate public key of all witnesses using the standard Schnorr verification algorithm [3]. If exceptions occurred, the client needs to look up the indexes where the bitmap contains "1"s. The client then looks up the corresponding public keys (from the list of witness public keys) and subtracts each of them from the aggregate public key. The client can then use this reduced public key to verify the signature as usual.
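A client-side verification sketch under the same assumptions (illustrative names only; SHA-512 reduced to a scalar stands in for H; the check of the local witness threshold against the number of exception bits is omitted for brevity):

    package cosi

    import (
        "crypto/sha512"

        "filippo.io/edwards25519"
    )

    // VerifyCoSi checks a collective signature (c, r) on msg against the
    // aggregate public key, after subtracting the keys of the witnesses whose
    // entry is set in the exception bitmap.
    func VerifyCoSi(msg []byte, c, r *edwards25519.Scalar,
        aggregate *edwards25519.Point, witnesses []*edwards25519.Point,
        exceptions []bool) bool {

        // Reduced public key: aggregate minus the keys of absent/refusing witnesses.
        reduced := edwards25519.NewIdentityPoint().Set(aggregate)
        for i, excluded := range exceptions {
            if excluded {
                reduced.Subtract(reduced, witnesses[i])
            }
        }

        // Recompute the commitment: V' = r*B + c*X'  (since r = v - c*x).
        V := edwards25519.NewIdentityPoint().ScalarBaseMult(r)
        V.Add(V, edwards25519.NewIdentityPoint().ScalarMult(c, reduced))

        // Recompute the challenge c' = H(m || V') and compare it with c.
        h := sha512.New()
        h.Write(msg)
        h.Write(V.Bytes())
        cPrime, err := edwards25519.NewScalar().SetUniformBytes(h.Sum(nil))
        if err != nil {
            return false
        }
        return cPrime.Equal(c) == 1
    }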
5.2 Format
+ The "CoSi certificate" is a list of all witnesse's ed25519 public keys and the aggregate public key of all individual public keys. Please note that while the current implementation only uses ed25519, it is completely possible to use any other elliptic curve implementations.
+ A CoSi signature contains:
  - the challenge c, an ed25519 scalar
  - the response r, an ed25519 scalar
  - the bitmap of exceptions, whose length is equal to the number of witnesses.
+ The messages sent during the four phases are as follows:
  - Announcement: the consensus document
  - Commitment: an ed25519 curve point
  - Challenge: an ed25519 scalar
  - Response: an ed25519 scalar and the exception bitmap
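Purely as an illustration of the message contents listed above (these Go types are hypothetical and do not define a wire encoding):

    package cosimsg

    // Announcement carries the statement to be signed down the tree.
    type Announcement struct {
        ConsensusDocument []byte
    }

    // Commitment carries an aggregated commitment point up the tree.
    type Commitment struct {
        V [32]byte // encoded ed25519 curve point
    }

    // Challenge carries the challenge scalar down the tree.
    type Challenge struct {
        C [32]byte // encoded ed25519 scalar
    }

    // Response carries the aggregated response and exception bitmap up the tree.
    type Response struct {
        R          [32]byte // encoded ed25519 scalar
        Exceptions []byte   // bitmap, one bit per witness
    }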
6. Compatibility
7. Implementation
Implementation in Go is open source at: https://github.com/dedis/cothority
8. Performance
9. Acknowledgements
This proposal has received some valuable feedback from Bryan Ford, Linus Gasser, Ismail Khoffi, Philipp Jovanovic, and Ludovic Barman.
A. References
[0] http://arxiv.org/pdf/1503.08768v3.pdf
[1] https://collector.torproject.org
[2] https://trac.torproject.org/projects/tor/wiki/doc/FallbackDirectoryMirrors
[3] https://en.wikipedia.org/wiki/Schnorr_signature
On 25 April 2016 at 07:32, Nicolas Gailly nicolas.gailly@epfl.ch wrote:
They can / should probably publish logs of the statements they witness or simply make available a public mirror of everything that its tree roster has been asked to sign.
This mirror can be 'unprotected' in the sense that you just stick the documents into a directory listed on the web. Any further protection (mirroring, append-only log) is unnecessary because the security of the scheme relies on a multitude of weakly-protected witnesses - not the security of an individual witness.
The mechanism is similar for witnesses that went offline. The parent of an offline witness will set the bit in the bitmap of the failed witness.
You mention this in Cons as well - it seems like the parents in the tree need to be more carefully selected than the leaf nodes or the tree can degrade heavily over time as they leave.
One issue for discussion is who should initiate CoSi protocol rounds and at what times. For example, each of the 9 DAs (or whatever subset
is online) could independently initiate CoSi rounds on each directory consensus event, producing up to nine separate, redundant collective signatures on each directory consensus. Alternatively, the common case might be for one of the 9 DAs to be the CoSi initiator at a given time, with a round-robin leader-change mechanism ensuring that another leader takes over if the prior one becomes unavailable.
Can't CoSi nodes ignore a request to initiate on a consensus document they're already in the process of signing?
A related issue for discussion is whether it could be problematic if
there are two or more distinct collective signatures for a given directory consensus, and whether it is a problem if distinct subsets of 5 DAs might (perhaps accidentally) produce multiple slightly different, though valid and legitimately-signed, consensus documents at about the same time. In other words, does Tor directory consensus “need” strong consistency with a single serialized timeline, as Byzantine consensus protocols are intended to provide - or is weak consistency with occasional cases of multiple concurrent consensus documents and/or collective signatures acceptable?
Such a situation should be a primary use case of CoSi. If an attacker submits a fraudulent consensus for signing it _should_ be signed and logged - that's the primary motivation. Having that signing process fail cause there's already a document in progress would be a poor design.
3.4 Optional: Break-the-glass Emergency Directory Adjustments ... With even fancier crypto, even the witnesses would not necessarily need to know, but that’s beyond the scope of this proposal and its desirability may be questionable at any rate.
I love me some fancy crypto, but I question the need for this feature at all, let alone more complex versions of it.
4.2 Benefits
Technically, it is quite easy to implement witness cosigning if the
group of witnesses is small. If we want the group of witnesses to be large, however – and we do, to ensure that compromising transparency would require not just a few but hundreds or even thousands of witnesses to be colluding maliciously – then gathering hundreds or thousands of individual signatures could become painful and inefficient. Worse, every client would need to check all these signatures individually.
That seems like a very painful cost to pay - I would expect this would significantly hurt the performance of Tor in constrained spaces: mobile phones, IOT devices. I had hoped signature checking was a one-shot, not for each individual key.
The key technical contribution of our research is a distributed protocol that makes large, decentralized witness cosigning groups practical. This decentralized approach enables the security of the whole system to scale with the number of witnesses.
Additionally, you (or maybe not you, but your protocol) results in a _single_ signature, not N signatures. So there's a size benefit compared to a standard 'N of M' scheme where each element is its own signature.
Not only does this system dramatically increase the cost of successfully deploying an attack on the DA (the attacker would have to corrupt a
large majority of the witnesses), it also decreases the incentive to launch such an attack because the threshold of witnesses that are required to sign the document for the signature to be accepted can be locally set on each client.
This does, however, give a pretty straightforward fingerprinting attack.
Using CoSi requires the clients to have the list of public keys of
all the witnesses embedded in the software, like certificate pinning. In order to reduce the size of this CoSi certificate, we embed the aggregated public key of all the witnesses and a hash representing the root of a Merkle tree containing the public key of all the witnesses. Using the certificate this way provides an universally-verifiable commitment to all the witnesses’ public keys, without the certificate actually containing them all.
But since the client needs the public keys, it will download each of them via some unspecified mechanism?
==== General Comments
My biggest comment/concern about this is determining the set of signers, how you update that set, and how you handle the long-tail of users with old sets of signers. You mention this in 3.3 - that signers can be baked into a particular version of Tor. 0.2.4.23 was released in Aug 2014 and the 0.2.4 branch has ~1500 relays in the network. As you say - you can go with a smaller number of handpicked ones to increase reliability or a larger number of less reliable ones, with the latter being preferable.
Where do the signers come from? Presumably we'd make up some criteria and say 'these relays are signers' and then fix them as signers and update that list every major version or something... but what's the turnover for relays on a 3-year timespan? I think suggesting criteria for choosing signers and performing the turnover analysis will be very important to see if this approach is feasible.
Aside from that - for folks who don't know I'm working on Certificate Transparency Gossip, which some people see as a 'competitor' to CoSi in the CT space. I agree CoSi reduces the need for Gossip in CT; but I would be happy to see CoSi developed and deployed for CT logs. I just don't think it will be for many of the trusted logs (e.g. Google's) for operational reasons - and I want to ensure the correct behavior of these logs in as privacy-preserving way as I can. So I keep working on that. Gossip for Tor would be radically different from Gossip in the web ecosystem (to the point where it may not work). CoSi may be a better fit.
-tom
On 30 Apr 2016, at 01:13, Tom Ritter tom@ritter.vg wrote:
3.4 Optional: Break-the-glass Emergency Directory Adjustments ... With even fancier crypto, even the witnesses would not necessarily need to know, but that’s beyond the scope of this proposal and its desirability may be questionable at any rate.
I love me some fancy crypto, but I question the need for this feature at all, let alone more complex versions of it.
This looks like a "golden key" from a distance, and, if there's a bug in the implementation, it could well become one.
I'd want to make sure we really needed this feature before implementing it.
Tim
Tim Wilson-Brown (teor)
teor2345 at gmail dot com PGP 968F094B ricochet:ekmygaiu4rzgsk6n
On 04/29/2016 05:13 PM, Tom Ritter wrote:
The mechanism is similar for witnesses that went offline. The parent of an offline witness will set the bit in the bitmap of the failed witness.
You mention this in Cons as well - it seems like the parents in the tree need to be more carefully selected than the leaf nodes or the tree can degrade heavily over time as they leave.
First of all, you have to remember that the tree layout is not important regarding the signature generation; it is merely there as an optimization. It enables us to restart CoSi with a new tree layout with a better leader. Secondly, in the context of Tor + CoSi, the leader would be selected as one of the DAs (in round-robin fashion or fixed, see below). From my limited point of view, the DAs are considered to be long-term and highly available servers.
One issue for discussion is who should initiate CoSi protocol rounds and at what times. For example, each of the 9 DAs (or whatever subset
is online) could independently initiate CoSi rounds on each directory consensus event, producing up to nine separate, redundant collective signatures on each directory consensus. Alternatively, the common case might be for one of the 9 DAs to be the CoSi initiator at a given time, with a round-robin leader-change mechanism ensuring that another leader takes over if the prior one becomes unavailable.
Can't CoSi nodes ignore a request to initiate on a consensus document they're already in the process of signing?
Sure. But "ignore a request" would mean that every witnesses refuse to sign? In that case, the leader must have a way to determine if the signature has not been issued because the consensus document is already in the process of being signed or because the consensus document looks suspicious. Here both approaches will return valid signature as long as the CoSi system has been given a valid consensus document.
A related issue for discussion is whether it could be problematic if
there are two or more distinct collective signatures for a given directory consensus, and whether it is a problem if distinct subsets of 5 DAs might (perhaps accidentally) produce multiple slightly different, though valid and legitimately-signed, consensus documents at about the same time. In other words, does Tor directory consensus “need” strong consistency with a single serialized timeline, as Byzantine consensus protocols are intended to provide - or is weak consistency with occasional cases of multiple concurrent consensus documents and/or collective signatures acceptable?
Such a situation should be a primary use case of CoSi. If an attacker submits a fraudulent consensus for signing it _should_ be signed and logged - that's the primary motivation. Having that signing process fail cause there's already a document in progress would be a poor design.
The question is related to what was discussed during the Tor dev meeting about the Tor Transparency idea (https://lists.torproject.org/pipermail/tor-dev/2014-July/007092.html). There have been some concerns regarding having different signatures for the same content (https://trac.torproject.org/projects/tor/wiki/org/meetings/2016WinterDevMeet...).
In any case, the specific rules for signing (see the above paragraph) should be tailored to Tor's needs and should be discussed more in-depth with Tor experts (I'm not one ;)).
3.4 Optional: Break-the-glass Emergency Directory Adjustments I love me some fancy crypto, but I question the need for this feature at all, let alone more complex versions of it.
Tim Wilson-Brown:
This looks like a "golden key" from a distance, and, if there's a bug in the implementation, it could well become one.
I'd want to make sure we really needed this feature before implementing it.
One issue I've been told about during the Tor dev meeting is the *reactivity* of DA operators when one or more relays are detected as being malicious. This feature is a proposal to alleviate the need to have at least 5 DAs ready to sign. I've also been told that Roger was interested in the idea, but it would be great if some DA operators could say more about this indeed.
About the "golden key" perspective, the rules determine exactly what this emergency process can do, and these rules will be publicly enforced by every witnesses. Moreover, this emergency process will still be publicly logged so any attack against a few D.A.s that uses this emergency process can easily be detected. But I agree these rules are delicate and as written in the proposal, they need much more discussion (offlist
4.2 Benefits
Technically, it is quite easy to implement witness cosigning if the
group of witnesses is small. If we want the group of witnesses to be large, however – and we do, to ensure that compromising transparency would require not just a few but hundreds or even thousands of witnesses to be colluding maliciously – then gathering hundreds or thousands of individual signatures could become painful and inefficient. Worse, every client would need to check all these signatures individually.
That seems like a very painful cost to pay - I would expect this would significantly hurt the performance of Tor in constrained spaces: mobile phones, IOT devices. I had hoped signature checking was a one-shot, not for each individual key.
It was a comparison with other multi-signature approaches ;) A CoSi signature can be verified in "one shot" against the aggregate public key of all witnesses. Moreover, using the Fallback Directory Mirrors (https://trac.torproject.org/projects/tor/wiki/doc/FallbackDirectoryMirrors) as witnesses, the Tor clients won't need any additional keys as they would already be included in the source code.
Additionally, you (or maybe not you, but your protocol) results in a _single_ signature, not N signatures. So there's a size benefit compared to a standard 'N of M' scheme where each element is it's own signature.
Exactly.
it also decreases the incentive to launch such an attack because the threshold of witnesses that are required to sign the document for the signature to be accepted can be locally set on each client.
This does; however, give a pretty straightforward fingerprinting attack.
I'm afraid I don't see what you mean here. Are you talking about the "locally set" threshold of witnesses that must have participated in the CoSi signature in order for it to be considered valid?
-> Yes: If an attacker has successfully fingerprinted a Tor client by knowing its "threshold", that means the attacker has already corrupted the *majority of the DAs* (because the consensus document still needs to be signed as usual by a majority of DAs), AND at least *threshold* witnesses.
-> No: Could you elaborate then, please? :)
But since the client needs the public keys, it will download each of them via some unspecified mechanism?
The second version of the proposal puts more emphasis on this subject. Specifically, the CoSi set of witnesses could be defined as a subset of the Fallback Directory Mirrors (https://trac.torproject.org/projects/tor/wiki/doc/FallbackDirectoryMirrors), so that a Tor client can already verify CoSi signatures using the embedded keys.
==== General Comments
My biggest comment/concern about this is determining the set of signers, how you update that set, and how you handle the long-tail of users with old sets of signers. You mention this in 3.3 - that signers can be baked into a particular version of Tor. 0.2.4.23 was released in Aug, 2014 and the 0.2.4 branch has ~1500 relays in the network. As you ay - you can go with a smaller number of handpicked ones to increase reliability or a larger number of less reliable ones, with the latter being preferable.
Where do the signers come from? Presumably we'd make up some criteria and say 'these relays are signers' and then fix them as signers and update that list every major version or something... but what's turnover for relays on a 3-year timespan? I think suggesting a criteria for choosing signers and performing the turnover analysis will be very important to see if this approach is feasible.
That's why we started emphasizing the use of the Fallback Directory Mirrors: they have already been selected using specific criteria that the Tor people chose. If a *very old* Tor client gets back online, and *if* the Fallback Directory Mirrors list is a new set disjoint from the *very old* set, the CoSi witness list has also changed radically. Thus, there's a high probability that the CoSi signature will be considered invalid. That does not mean the client can't use Tor (it could be a flag in the torrc, "AcceptInvalidCosiSig"), but the client will see this warning and know that it should definitely update to a more recent version for better security. See section 3.2.3 for more on this.
Aside from that - for folks who don't know I'm working on Certificate Transparency Gossip, which some people see as a 'competitor' to CoSi in the CT space. I agree CoSi reduces the need for Gossip in CT; but I would be happy to see CoSi developed and deployed for CT logs. I just
Last year we already designed a prototype using CoSi with CT. We'd be happy to talk more with you about that, so let's keep in touch :)
don't think it will be for many of the trusted logs (e.g. Google's) for operational reasons - and I want to ensure the correct behavior of these logs in as privacy-preserving way as I can. So I keep working on that. Gossip for Tor would be radically different from Gossip in the web ecosystem (to the point where it may not work). CoSi may be a better fit.
-tom
I'm always happy to answer any more concerns / feedback you might have :)
Thanks a lot,
Nicolas
On 1 May 2016, at 00:56, Nicolas Gailly nicolas.gailly@epfl.ch wrote:
4.2 Benefits
Technically, it is quite easy to implement witness cosigning if the group of witnesses is small. If we want the group of witnesses to be large, however – and we do, to ensure that compromising transparency would require not just a few but hundreds or even thousands of witnesses to be colluding maliciously – then gathering hundreds or thousands of individual signatures could become painful and inefficient. Worse, every client would need to check all these signatures individually.
That seems like a very painful cost to pay - I would expect this would significantly hurt the performance of Tor in constrained spaces: mobile phones, IOT devices. I had hoped signature checking was a one-shot, not for each individual key.
It was a comparison regarding other multi-signature approaches ;) A CoSi signature can be verified in "one-shot" against the aggregate public key of all witnesses. Moreover, using the Fallback Directory mirrors https://trac.torproject.org/projects/tor/wiki/doc/FallbackDirectoryMirrors) as witnesses, the Tor clients won't need any additional keys as they would already be included in the source code.
Tor doesn't actually hard-code any keys in its source code. It only includes hashed key fingerprints for authorities and fallback directories.
When Tor bootstraps:
1. Tor contacts a fallback directory; it compares the fingerprint of the public key the fallback directory sends to the hard-coded fingerprint,
2. Tor downloads a consensus, and downloads the authority keys required to validate the signatures on that consensus,
3. once Tor has validated the consensus, it uses the relays in the consensus.
In order to obtain the fallback directory mirror keys, you would have to add another step: 2b. Tor downloads the descriptors of each of the fallback directory mirrors
Descriptors are about 1.5KB on average, and connecting to several different fallbacks to download these descriptors could add significantly to bootstrap time.
It's worth noting that there's a cryptographic issue with using microdescriptors, which is that microdescriptors aren't signed by the private keys of each relay. Instead, they're authenticated by being included in a valid consensus. But the consensus that we have hasn't been validated.
So this means that CoSi needs to download the full descriptors, which are significantly larger than microdescriptors.
It's worth documenting this in the proposal, as it would be very easy to implement an insecure variant of CoSi that used microdescriptors.
It's also worth noting that the requirement is that Tor download a majority of authority keys, not all of them. It's important that we also specify the number of fallback keys that need to be downloaded for the CoSi scheme.
Additionally, you (or maybe not you, but your protocol) results in a _single_ signature, not N signatures. So there's a size benefit compared to a standard 'N of M' scheme where each element is it's own signature.
Exactly.
it also decreases the incentive to launch such an attack because the threshold of witnesses that are required to sign the document for the signature to be accepted can be locally set on each client.
This does; however, give a pretty straightforward fingerprinting attack.
I'm afraid I don't see what you mean here. Are you talking about the "locally set" threshold of witnesses that must have participated in the CoSi signature in order to be considered valid ? -> Yes: If an attacker has successfully fingerprinted a Tor client by knowing its "threshold", that means the attacker already has corrupted the *majority of the D.A.s* (because the consensus document still need to be signed as usual by a majority of D.A.s), AND at least *threshold* witnesses. -> No: Could you elaborate then please ? :)
But since the client needs the public keys, it will download each of them via some unspecified mechanism?
The second version of the proposal emphasizes more on this subject. Specifically, the CoSi set of witnesses could be defined as a subset of the Fallback directory mirrors (https://trac.torproject.org/projects/tor/wiki/doc/FallbackDirectoryMirrors), so that way a Tor client can already verify CoSi signatures using the embedded keys.
As I describe above, the proposal needs to specify the fallback key download mechanism, and justify that it is secure.
==== General Comments
My biggest comment/concern about this is determining the set of signers, how you update that set, and how you handle the long-tail of users with old sets of signers. You mention this in 3.3 - that signers can be baked into a particular version of Tor. 0.2.4.23 was released in Aug, 2014 and the 0.2.4 branch has ~1500 relays in the network. As you ay - you can go with a smaller number of handpicked ones to increase reliability or a larger number of less reliable ones, with the latter being preferable.
This persistence of old tor relay versions is why we asked that fallback directory mirrors be up for the next 2 years.
However, it's worth noting that older versions of Tor were excluded from the network due to Heartbleed (April 2014). It's possible that, in future, old tor relay versions may stay on the network for longer than 2 years.
Where do the signers come from? Presumably we'd make up some criteria and say 'these relays are signers' and then fix them as signers and update that list every major version or something... but what's turnover for relays on a 3-year timespan? I think suggesting a criteria for choosing signers and performing the turnover analysis will be very important to see if this approach is feasible.
That's why we started emphasizing on using the Fallback Directory mirrors because they already have been selected using some specific criterias that the Tor people already chose. If a *very old* Tor client get back online, and *if* the Fallback Directory mirrors list is a new disjoint set from the *very old* set, the new CoSi witnesses list also has changed radically. Thus, there's a high probability that the CoSi signature will be considered invalid, that does not mean the client can't use Tor (it could be a flag in the torrc "AcceptInvalidCosiSig"); but the client will see this warning and that he should definitely update to a more recent version for a better security. See section 3.2.3 for more on this.
I'm not convinced that the set of fallback directory mirrors is necessarily suited for CoSi.
Fallback directory mirror selection uses the following criteria:
* relay operators opting-in to be a fallback directory mirror,
* being on the same key, addresses, and ports, ideally for the life of the release, nominally the next 2 years,
* having good uptime, calculated as a decaying weighted average,
* having the Guard, Running, and V2Dir flags almost all the time, calculated as a decaying weighted average,
* having at least 3MB/s bandwidth (one hundred times the expected additional load on a fallback directory),
* being able to serve the consensus within 15 seconds,
* not having more than one fallback with the same IP, family, or operator (contact info).
After these criteria are applied, we choose the 100 fallback candidates with the highest bandwidth.
Some of these criteria just don't seem that relevant to CoSi.
Others, like the stability criteria, are directly relevant - because they tell you how many fallbacks we expect to be up at any one time (95% when the 0.2.8 list was created). We expect the number of fallbacks that are up to decrease over time, but, since this is the first release with fallbacks, we don't know how rapidly their numbers will fall.
The fallback list has been selected so that even if only a small number of fallbacks are up, they can handle the extra load. Even if they are all down, tor will still bootstrap from the authorities, with only a few seconds' delay.
A reliable, practical design like this is a crucial part of the CoSi proposal. It determines how many witnesses should be required out of the total number, and what criteria should be used to select the list of witnesses.
It's also worth considering how much effort it would take to contact relay operators about CoSi. I only asked operators about becoming a fallback directory mirror, and not any other potential uses of the list of fallback directories. It took me several weeks just to send out these emails, and collate responses.
Operators will likely want to know about the version of Tor they'll need to run, and any additional bandwidth or CPU requirements. Knowing the additional bandwidth and CPU load would be useful as part of the proposal. If the fallbacks have to serve each others' descriptors in order for clients to obtain the fallback keys, this extra bandwidth needs to be accounted for.
If a descriptor is 1.5KB, and you need to download 100 of them, that's an extra 1.5MB at bootstrap time. Microdescriptor consensuses are 1.3MB. So that would mean increasing the additional bandwidth requirements for fallback directory mirrors from 20KB/s to 50KB/s. This excludes the bandwidth costs of the CoSi scheme itself, which are hopefully much, much smaller.
(These are uncompressed figures, compression might also be a factor, but it doesn't seem to differ that much between microdescriptor consensuses and descriptors, it's about 45% for both.)
Please feel free to use this email or these ideas in any updates to the proposal.
Tim
Tim Wilson-Brown (teor)
teor2345 at gmail dot com PGP 968F094B ricochet:ekmygaiu4rzgsk6n
On 30 April 2016 at 09:56, Nicolas Gailly nicolas.gailly@epfl.ch wrote:
On 04/29/2016 05:13 PM, Tom Ritter wrote:
The mechanism is similar for witnesses that went offline. The parent of an offline witness will set the bit in the bitmap of the failed witness.
You mention this in Cons as well - it seems like the parents in the tree need to be more carefully selected than the leaf nodes or the tree can degrade heavily over time as they leave.
First of all, you have to remember that the Tree layout is not important regarding the signature generation, but is merely here as an optimization. It enables us to restart CoSi with a new Tree layout with a better Leader. Secondly, in the context of Tor + CoSi, the leader would be selected as one of the D.A. (in RR fashion or fixed, see below). From my limited point of view, the D.A.s are considered to be longterm and highly available servers.
Yup, DAs are longterm and available servers. Now it's my understanding that the parent nodes were needed to communicate to the children, and this was a part of the optimized protocol you had developed. If a parent goes offline, permanently, one needs to update the tree and reorganize it. This won't cause a change to signature _verification_ but will cause a change to signature _generation_ - nodes will need to be told 'your new parent is X', and they need to be told that outside the normal tree-based communication. So - doable, and doesn't affect clients, but does require some engineering.
One issue for discussion is who should initiate CoSi protocol rounds and at what times. For example, each of the 9 DAs (or whatever subset
is online) could independently initiate CoSi rounds on each directory consensus event, producing up to nine separate, redundant collective signatures on each directory consensus. Alternatively, the common case might be for one of the 9 DAs to be the CoSi initiator at a given time, with a round-robin leader-change mechanism ensuring that another leader takes over if the prior one becomes unavailable.
Can't CoSi nodes ignore a request to initiate on a consensus document they're already in the process of signing?
Sure. But "ignore a request" would mean that every witnesses refuse to sign? In that case, the leader must have a way to determine if the signature has not been issued because the consensus document is already in the process of being signed or because the consensus document looks suspicious. Here both approaches will return valid signature as long as the CoSi system has been given a valid consensus document.
From my point of view - a consensus should never not-be-signed because it's suspicious. CoSi nodes aren't the determination of what's suspicious. That's some other process entirely separate from this.
I see it as more simple: I get a request to sign document A (identified by some hash). Am I already signing it, or signed it in the past? If so, ignore it. If not, sign it. I would need support for running many instances of the multi-step signing protocol at the same time.
A related issue for discussion is whether it could be problematic if
there are two or more distinct collective signatures for a given directory consensus, and whether it is a problem if distinct subsets of 5 DAs might (perhaps accidentally) produce multiple slightly different, though valid and legitimately-signed, consensus documents at about the same time. In other words, does Tor directory consensus “need” strong consistency with a single serialized timeline, as Byzantine consensus protocols are intended to provide - or is weak consistency with occasional cases of multiple concurrent consensus documents and/or collective signatures acceptable?
Such a situation should be a primary use case of CoSi. If an attacker submits a fraudulent consensus for signing it _should_ be signed and logged - that's the primary motivation. Having that signing process fail cause there's already a document in progress would be a poor design.
The question is related to what was discussed during the Tor dev meeting about the Tor Transparency idea (https://lists.torproject.org/pipermail/tor-dev/2014-July/007092.html). There's been some concerns regarding having different signatures for the same content (https://trac.torproject.org/projects/tor/wiki/org/meetings/2016WinterDevMeet...).
In any case, the specifics rules for signing (see the above paragraph.) should be tailored to the Tor needs and should be discussed more in-depth with Tor experts (I'm not one;);
Hmmmmm. I *think* I understand this concern. AIUI the issue is Consensus A and B - which differ only by Authority N who signed B but not A due to network timing or clock skew. Both valid consensuses.
It seems to me we would want to log them both. We want to track all signatures made by a DA key. But perhaps I haven't thought closely enough about this.
4.2 Benefits
Technically, it is quite easy to implement witness cosigning if the
group of witnesses is small. If we want the group of witnesses to be large, however – and we do, to ensure that compromising transparency would require not just a few but hundreds or even thousands of witnesses to be colluding maliciously – then gathering hundreds or thousands of individual signatures could become painful and inefficient. Worse, every client would need to check all these signatures individually.
That seems like a very painful cost to pay - I would expect this would significantly hurt the performance of Tor in constrained spaces: mobile phones, IOT devices. I had hoped signature checking was a one-shot, not for each individual key.
It was a comparison regarding other multi-signature approaches ;) A CoSi signature can be verified in "one-shot" against the aggregate public key of all witnesses. Moreover, using the Fallback Directory mirrors https://trac.torproject.org/projects/tor/wiki/doc/FallbackDirectoryMirrors) as witnesses, the Tor clients won't need any additional keys as they would already be included in the source code.
Ah okay, so it is just one signature verification? That's great if so.
Additionally, you (or maybe not you, but your protocol) results in a _single_ signature, not N signatures. So there's a size benefit compared to a standard 'N of M' scheme where each element is it's own signature.
Exactly.
it also decreases the incentive to launch such an attack because the threshold of witnesses that are required to sign the document for the signature to be accepted can be locally set on each client.
This does; however, give a pretty straightforward fingerprinting attack.
I'm afraid I don't see what you mean here. Are you talking about the "locally set" threshold of witnesses that must have participated in the CoSi signature in order to be considered valid ? -> Yes: If an attacker has successfully fingerprinted a Tor client by knowing its "threshold", that means the attacker already has corrupted the *majority of the D.A.s* (because the consensus document still need to be signed as usual by a majority of D.A.s), AND at least *threshold* witnesses. -> No: Could you elaborate then please ? :)
Yes. Hardly an easy attack, but if Alice has set her threshold to N+20 signers from the normal N, I can feed a client consensus documents with N+19 and N+20 witnesses and if the first doesn't stick and the second does - I've a good idea it's Alice (or someone else who has set their threshold to N+20).
Teor's comments about Fallback Dirs are better than ones I could write. =)
-tom
A quick response:
it also decreases the incentive to launch such an attack because the threshold of witnesses that are required to sign the document for the signature to be accepted can be locally set on each client.
This does; however, give a pretty straightforward fingerprinting attack.
I'm afraid I don't see what you mean here. Are you talking about the "locally set" threshold of witnesses that must have participated in the CoSi signature in order to be considered valid ? -> Yes: If an attacker has successfully fingerprinted a Tor client by knowing its "threshold", that means the attacker already has corrupted the *majority of the D.A.s* (because the consensus document still need to be signed as usual by a majority of D.A.s), AND at least *threshold* witnesses. -> No: Could you elaborate then please ? :)
Yes. Hardly an easy attack, but if Alice has set her threshold to N+20 signers from the normal N, I can feed a client consensus documents with N+19 and N+20 witnesses and if the first doesn't stick and the second does - I've a good idea it's Alice (or someone else who has set their threshold to N+20).
My 2 cents about that ;)
1 - I think a fingerprinting attack over a range of ~100 discrete values (there would be around ~100 witnesses) will be very inaccurate given the number of Tor users.
2 - If an attacker already has the possibility of doing this, that means they already control a majority of the DAs plus some CoSi witnesses.
-> The attacker can only carry out this attack against as many witnesses as it controls. If Alice has set her threshold to 80, the attacker must control at least 80 witnesses (which is already a very, very bad situation!). The default threshold should be high (> 80, > 90) to drastically increase the cost of such an attack.
-> I'm also thinking there could be far more damaging attacks the attacker could carry out in a situation like this (a consensus containing a majority of its relays, etc.).
Teor's comments about Fallback Dirs are better than ones I could write. =)
Thanks a *lot* (both of you) for your comments, they've been very fruitful! I'm already working on the next version in the little free time I have.
More feedback always welcome ;)
Nicolas
-tom