On Tue, Mar 01, 2022 at 07:06:56PM -0500, Ian Goldberg via tor-project wrote:
> Do all snowflake users have to get pointed at the same machine/IP address? That is, would a bunch of people running bridges each contributing a fraction of 100 TB/month (~= 300 Mbps) be helpful?
That's a good question. The answer is yes: for technical reasons, it is currently a requirement that all users go to a centralized-ish bridge. There are two main reasons, one having to do with the "backend" Tor interface and one having to do with the "frontend" Snowflake interface. At the moment, the Tor reason is the more constraining. (For full context, see https://forum.torproject.net/t/1483.) I would like to work toward the situation you propose, but at the moment it's not possible.
For reference, the pipeline looks like this: Snowflake proxies connect to a pluggable transport server, called snowflake-server, using WebSocket. snowflake-server is an HTTPS server that receives the WebSocket traffic, decodes it, and forwards it to tor. snowflake-server and tor do not need to run on the same host, though they currently do. Running them on separate hosts is an option for scaling, though ideally they would still be located near each other on the network, which the current hosting does not allow.
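To make the decode-and-forward step concrete, here is a minimal sketch of a WebSocket server that forwards a connection's bytes to a local tor ORPort. It is not the actual snowflake-server code (which also terminates TLS and carries the Turbo Tunnel session logic discussed below); it assumes the github.com/gorilla/websocket package and a tor listening at 127.0.0.1:9001, both illustrative choices rather than real deployment values.

    package main

    import (
        "log"
        "net"
        "net/http"

        "github.com/gorilla/websocket"
    )

    var upgrader = websocket.Upgrader{
        // Proxies connect from arbitrary origins, so skip the origin check
        // in this sketch.
        CheckOrigin: func(r *http.Request) bool { return true },
    }

    func handle(w http.ResponseWriter, r *http.Request) {
        ws, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            return
        }
        defer ws.Close()

        // Forward the decoded WebSocket payload to a local tor ORPort
        // (address is a placeholder, not the real configuration).
        tor, err := net.Dial("tcp", "127.0.0.1:9001")
        if err != nil {
            return
        }
        defer tor.Close()

        // Copy client-to-tor in a goroutine; tor-to-client below.
        go func() {
            for {
                _, msg, err := ws.ReadMessage()
                if err != nil {
                    tor.Close()
                    return
                }
                if _, err := tor.Write(msg); err != nil {
                    return
                }
            }
        }()
        buf := make([]byte, 2048)
        for {
            n, err := tor.Read(buf)
            if n > 0 {
                if werr := ws.WriteMessage(websocket.BinaryMessage, buf[:n]); werr != nil {
                    return
                }
            }
            if err != nil {
                return
            }
        }
    }

    func main() {
        http.HandleFunc("/", handle)
        // The real server speaks HTTPS; plain HTTP keeps the sketch short.
        log.Fatal(http.ListenAndServe(":8080", nil))
    }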
The "backend" issue with Tor is that Tor clients expect a certain bridge fingerprint. In this case it is 2B280B23E1107BB62ABFC40DDCC8824814F80A72, which is hard-coded in Tor Browser configuration files. So even if there were a pool of multiple bridges, they would all need to share the same identity keys; there's a trust boundary there that's not easy to distribute. (In fact, the load balancing configuration works much like that already: many instances that use the same identity keys. We could run those multiple instances on separate hosts—and that's the next planned step for scaling after upgrading the server—but they can't easily all be administered by different people.)
There are ways to deal with the fingerprint issue, which are considered in https://bugs.torproject.org/tpo/anti-censorship/pluggable-transports/snowfla.... One way would be for the Tor client not to verify its bridge fingerprint, but as I understand it that enables certain traffic tagging attacks. Another way would be for the client to have an allowlist of permissible fingerprints rather than a single value, but I imagine that would not be a quick change in core tor. shelikhoo on the anti-censorship team has been thinking about this issue, and may have other ideas.
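In code terms, the allowlist idea just replaces an exact-match check on a single expected fingerprint with a set-membership check. A toy sketch (in Go for brevity; core tor is written in C, and this is not its actual verification code):

    package main

    import "fmt"

    // acceptableFingerprints is a hypothetical allowlist of bridge identity
    // fingerprints the client would accept, instead of one pinned value.
    var acceptableFingerprints = map[string]bool{
        "2B280B23E1107BB62ABFC40DDCC8824814F80A72": true,
        // ...further independently operated bridges could be listed here.
    }

    func fingerprintOK(fp string) bool {
        return acceptableFingerprints[fp]
    }

    func main() {
        fmt.Println(fingerprintOK("2B280B23E1107BB62ABFC40DDCC8824814F80A72"))
    }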
The "frontend" issue with Snowflake is that snowflake-server does more than decapsulate WebSocket connections: it also manages all the Turbo Tunnel session state. snowflake-server effectively has a "TCB" for each ongoing session in its memory, so that when a temporary proxy WebSocket connection dies, the client can reconnect through a different proxy without losing its state. snowflake-server is multithreaded and can expand to use any available CPU capacity, but it would require significant rearchitecting to share or somehow partition the Turbo Tunnel state across multiple hosts that do not share memory. At the moment, snowflake-server uses about half the CPU and RAM on the server, and Tor-related process use the other half.
In summary: it's easy to decouple the frontend pluggable transport and the backend Tor, but the backend Tor instances still need to share an identity key, and the frontend snowflake-server cannot currently scale beyond a single host. Removing either of these limitations is not impossible, but it would be a non-trivial project.