Below is my first go at a list of criteria to consider when evaluating pluggable transports for readiness of deployment to users. The goal isn't to say that every transport has to "pass" each question -- rather, I'm hoping to fund a researcher-developer at some point soon to polish some of the research prototypes so we can include them in the pluggable-transports Tor Browser Bundle (PT TBB), and I wanted to think about guidelines for how they should prioritize which PTs to work on.
What points is it missing? What points does it mis-explain?
See also George's "How the pluggable transports factory works" mail: https://lists.torproject.org/pipermail/tor-dev/2013-August/005231.html
Thanks! --Roger
----
Section one, how reviewed / reviewable is it:
1) Is the software published, and is it entirely free and open source? Some designs call for non-free (and non-distributable) components like Skype, a copy of Windows in a VM image, etc.
2) Is there a published design document, including a threat model? Is there a specification? How testable are its security / unblockability claims? We should also consider how much peer review the design has received, and whether the project is getting continued attention by its inventors.
3) What is its deployment history? What kind of users did it have and how many? How much publicity? Did it get blocked?
Section two, evaluation of design:
4) How difficult or expensive will it be to block the design (by protocol, by endpoints, etc)? For example, what services or protocols does it rely on being usable or reachable? Expense could include actual cost or could include collateral damage. Another way to measure might be the fraction of censoring countries where the technique is expected to work.
5) What anonymity improvements does the design provide, if any? While many pluggable transports focus only on reachability and leave anonymity properties to Tor, some research designs use the pluggable transport interface to experiment with improved traffic analysis resistance, such as by adding padding to defend better against website fingerprinting attacks.
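To make the padding idea concrete, here's a toy length-hiding scheme (entirely hypothetical -- not any deployed design, and the cell size is an arbitrary choice): every message gets split into fixed-size cells padded with random bytes, so an on-path observer sees only multiples of the cell size rather than true message lengths.

```python
import os
import struct

CELL = 512  # bytes per cell on the wire (assumed value)
HDR = 2     # length-prefix header inside each cell

def pad(msg: bytes) -> bytes:
    """Frame msg as fixed-size cells: [2-byte length][payload][random fill]."""
    out = bytearray()
    body = CELL - HDR
    chunks = [msg[i:i + body] for i in range(0, len(msg), body)] or [b""]
    for c in chunks:
        out += struct.pack(">H", len(c)) + c + os.urandom(body - len(c))
    return bytes(out)

def unpad(wire: bytes) -> bytes:
    """Recover the original message by reading each cell's length prefix."""
    msg = bytearray()
    for i in range(0, len(wire), CELL):
        (n,) = struct.unpack(">H", wire[i:i + HDR])
        msg += wire[i + HDR:i + HDR + n]
    return bytes(msg)
```

Note the trade-off this illustrates: the defense costs bandwidth (a 5-byte message still ships 512 bytes), which ties directly into the overhead question below.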
6) What's the bandwidth overhead? Some transports like obfsproxy don't inflate communication size, while others like StegoTorus wrap their communications in a more innocuous protocol at the cost of sending and receiving more bytes. Designs with higher bandwidth overhead can provide better blocking-resistance, but they are also less suited to low-bandwidth environments.
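One way to quantify this is extra wire bytes per payload byte. The encodings below are stand-ins I picked for illustration (not the actual obfsproxy or StegoTorus wire formats): a stream-cipher-style transform keeps the size unchanged, while re-encoding into a text-safe cover format pays a fixed percentage.

```python
import base64

payload = b"\x00" * 10_000  # 10 KB of application data

framed = payload                     # stand-in for a same-size stream transform
wrapped = base64.b64encode(payload)  # stand-in for a text-based cover encoding

def overhead(wire: bytes, data: bytes) -> float:
    """Fractional overhead: extra wire bytes per payload byte."""
    return len(wire) / len(data) - 1.0

print(f"stream-transform style: {overhead(framed, payload):.0%} overhead")
print(f"base64 cover style:     {overhead(wrapped, payload):.0%} overhead")
```

Here the base64-style wrapping costs about 33% more bytes; a transport that mimics a chatty cover protocol with its own headers and handshakes can cost far more.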
7) Scanning-resistance: how does the design fare against active probing attacks, like China's follow-up connections that test for vanilla Tor traffic? ("How the Great Firewall of China is Blocking Tor", Philipp Winter, FOCI 2012).
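One defense in this space (ScrambleSuit takes roughly this approach; the exact framing below is my own invention for illustration) is to require the client's first bytes to prove knowledge of a per-bridge secret distributed out of band. A prober who connects without the secret gets no response, so it can't confirm what protocol the bridge speaks.

```python
import hashlib
import hmac
import os

BRIDGE_SECRET = os.urandom(32)  # shared with real clients out of band

def client_hello(secret: bytes) -> bytes:
    """First flight: a random nonce plus an HMAC over it keyed by the secret."""
    nonce = os.urandom(16)
    return nonce + hmac.new(secret, nonce, hashlib.sha256).digest()

def bridge_accepts(hello: bytes, secret: bytes) -> bool:
    """True only if the client proved knowledge of the secret; otherwise the
    bridge should stay silent rather than send any distinguishable reply."""
    nonce, tag = hello[:16], hello[16:48]
    expected = hmac.new(secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)
```

A real design also has to worry about replayed hellos and timing differences between the accept and reject paths, but the sketch shows the core idea: reachability testing alone shouldn't distinguish the bridge from a closed port.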
Section three, evaluation of implementation:
8) Does the implementation use Tor's Pluggable Transport (PT) Application Programming Interface (API) already? Tor has a standard recommended approach so transport modules can be invoked and managed by the Tor process. The PT API also allows Tor to automatically publish capabilities of the transport, collect user and usage statistics from the transport, and so on.
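For reference, the client-side startup handshake looks roughly like this (a simplified sketch following pt-spec.txt version 1; a real transport also needs error handling, the SOCKS listener itself, and the server-side SMETHOD variant). Tor launches the transport as a subprocess, passes configuration through TOR_PT_* environment variables, and reads the transport's listening address back from its stdout.

```python
import os
import sys

def pt_client_setup(transport: str, listen_addr: str) -> None:
    """Negotiate the managed-transport protocol with the parent tor process."""
    versions = os.environ.get("TOR_PT_MANAGED_TRANSPORT_VER", "").split(",")
    if "1" not in versions:
        print("VERSION-ERROR no-version")
        sys.exit(1)
    print("VERSION 1")

    wanted = os.environ.get("TOR_PT_CLIENT_TRANSPORTS", "").split(",")
    if transport in wanted or "*" in wanted:
        # Tell tor where our local SOCKS listener for this transport lives.
        print(f"CMETHOD {transport} socks5 {listen_addr}")
    print("CMETHODS DONE")
```

A prototype that speaks this protocol can be listed in torrc with a single ClientTransportPlugin line, which is most of what "ready to ship in the PT TBB" means on the integration side.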
9) Is the implementation cross-platform (Windows, OS X, Linux at least)? How about support for mobile platforms?
10) How easy is the build process, and how easy is deployment and scaling? For example, what software libraries does it require, how likely are we to get enough bridge-side addresses, etc?
11) How is the code from a security and maintainability perspective? Are there unit tests, integration tests, etc? While the underlying Tor channel provides security properties like encryption and authentication, pluggable transports can still introduce new security risks if designed or built improperly.
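As an example of the minimum bar here, a transport's obfuscation layer should at least ship with round-trip tests along these lines (the encode/decode codec below is a hypothetical stand-in, deliberately trivial, just to show the shape of the test):

```python
import os

def encode(msg: bytes, key: bytes) -> bytes:
    """Toy XOR obfuscator standing in for a real transport codec."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(msg))

def decode(wire: bytes, key: bytes) -> bytes:
    return encode(wire, key)  # XOR is its own inverse

def test_roundtrip():
    """Decoding an encoded message must return the original bytes, and the
    wire form must differ from the plaintext, across edge-case sizes."""
    key = os.urandom(16)
    for size in (0, 1, 15, 16, 1000):
        msg = os.urandom(size)
        wire = encode(msg, key)
        assert decode(wire, key) == msg
        if size >= 16:
            assert wire != msg
```

Tests like this catch framing and boundary bugs (empty messages, sizes straddling block or key lengths) that otherwise surface as rare connection failures in the field, where they're far more expensive to debug.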