Hi George,
I posted an initial draft of the proposal here: https://lists.torproject.org/pipermail/tor-dev/2014-November/007863.html Any feedback would be awesome.
OK, I’ll have a chance to look at this in the next few days.
Specifically, I would be interested in understanding the concept of additive noise a bit better. As you can see, the proposal draft is still using multiplicative noise, and if you think additive is better we should change it. Unfortunately, I couldn't find any good resources on the Internet explaining the difference between additive and multiplicative noise. Could you expand a bit on what you said above? Or link to a paper that explains more? Or link to some other system that uses additive noise (or, even better, its implementation)?
The technical argument for differential privacy is explained in http://research.microsoft.com/en-us/projects/databaseprivacy/dwork.pdf. The definition appears in Def. 2, the Laplace mechanism is given in Eq. 3 of Sec. 5, and Thm. 4 shows why that mechanism achieves differential privacy.
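In case it saves you digging through the PDF, the mechanism is (in my paraphrase, so please check it against Eq. 3):

    \mathcal{K}_f(x) = f(x) + (Y_1, \ldots, Y_d), \qquad Y_i \sim \mathrm{Lap}(\Delta f / \varepsilon)

    \Delta f = \max_{x \sim x'} \lVert f(x) - f(x') \rVert_1

where \Delta f is the sensitivity of the query f, i.e. the most that one user's data can change the answer between adjacent databases x and x', and \varepsilon is the privacy parameter. Thm. 4 is the proof that adding noise at that scale gives \varepsilon-differential privacy.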
But that stuff is pretty dry. The basic idea is that you're trying to hide the contribution of any one sensitive input (e.g. a single user's data, or a single component of a single user's data). The amount of noise you need to cover that contribution doesn't scale with the number of other users, which is why additive noise is enough.
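To make that concrete, here's a minimal sketch I wrote to illustrate the point (not code from the proposal; the function names and the sensitivity/epsilon values are just placeholders). Additive Laplace noise has the same scale whether the true count is 10 or 10,000, while a multiplicative perturbation grows with the count itself:

import math
import random


def laplace_noise(scale):
    # One sample from a zero-centered Laplace distribution via inverse CDF.
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def noisy_count_additive(true_count, sensitivity=1.0, epsilon=0.3):
    # Laplace mechanism: add Lap(sensitivity / epsilon) to the true count.
    # sensitivity is how much one user can change the count (1 for a simple
    # user count); epsilon is the privacy parameter. Both are placeholders.
    return true_count + laplace_noise(sensitivity / epsilon)


def noisy_count_multiplicative(true_count, rel_error=0.1):
    # Multiplicative noise for contrast: the perturbation is proportional
    # to the count, so its absolute size grows with the number of users.
    return true_count * (1.0 + random.gauss(0.0, rel_error))


if __name__ == "__main__":
    for n in (10, 10_000):
        print(n,
              round(noisy_count_additive(n), 1),
              round(noisy_count_multiplicative(n), 1))

If you run it a few times you'll see the additive version perturbs both counts by roughly the same few units, which is all it takes to cover one user, whereas the multiplicative version adds noise on the order of hundreds to the large count for no extra privacy benefit.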
Hope that helps, Aaron