Hello,
#ChromeDevSummit this month hosted a talk on the privacy budget (https://youtu.be/0STgfjSA6T8). In it, they mentioned that they are currently running a large-scale study to identify which potentially identifying APIs sites are using and how heavily they are using them. This data could be really useful for improving the fingerprinting protections in the Tor Browser, so we might want to keep an eye on developments here.
Also, I think I am finally beginning to see the brilliance in the privacy budget proposal (https://github.com/bslassey/privacy-budget).
To the best of my knowledge, current mainstream browsers use some combination of hardcoded lists of companies (like the Disconnect list used by Firefox's Enhanced Tracking Protection), hardcoded lists of scripts (like uBlock Origin), and heuristics (from the relatively simple Privacy Badger to the more complicated Intelligent Tracking Prevention in Safari). These approaches are oddly reminiscent of old-school signature-based AV: rather than attacking the core problem, they try to identify bad scripts. For starters, as AV experience has taught us, such approaches might work against mass attacks but do not hold up against targeted ones. So while they allow for mainstream, browser-level usability, they don't seem compatible with the Tor Browser's threat model.
This is where, IMO, the privacy budget shines. It goes straight to the source of the problem, the APIs that reveal potentially identifying information, and keeps a ledger of calls to them. This is more conservative than the approaches above and closer to the Tor Browser's current approach (which is to block or spoof the outputs of potentially identifying APIs). Of course, it comes with the same false-positive issue that currently affects the Tor Browser, but hey, we can't have everything. Moreover, it lends itself nicely to a good affordance: the browser can show the user how much data they are leaking to the page and, if the page exceeds the set budget and requests more API calls, how much more data allowing those calls would expose.
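To make that concrete, here is a very rough TypeScript sketch of what such a ledger could look like. The API names, per-API entropy values, and the budget are all made up for illustration and are not taken from the proposal:

    // Rough sketch of a per-page entropy ledger. Names and numbers
    // are illustrative only, not from the actual proposal.

    // Estimated identifying bits exposed by each API (made-up values).
    const API_ENTROPY_BITS: Record<string, number> = {
      "canvas.toDataURL": 5.0,
      "navigator.hardwareConcurrency": 2.0,
      "screen.width": 3.0,
      "AudioContext.getChannelData": 4.0,
    };

    class PrivacyLedger {
      private spent = 0;

      constructor(private budget: number) {}

      // Returns true if the call fits in the budget; otherwise the
      // browser would block the API or hand back a spoofed value.
      charge(api: string): boolean {
        const cost = API_ENTROPY_BITS[api] ?? 0;
        if (this.spent + cost > this.budget) return false;
        this.spent += cost;
        return true;
      }

      // What the browser could surface to the user: bits spent so
      // far, and how much one more call would add.
      report(nextApi?: string): string {
        let msg = `spent ${this.spent} of ${this.budget} bits`;
        if (nextApi) {
          msg += `; allowing ${nextApi} would add ${API_ENTROPY_BITS[nextApi] ?? 0} more`;
        }
        return msg;
      }
    }

(Summing per-API bits like this obviously ignores correlations between APIs, but it gets the bookkeeping and the user-facing report across.)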
Right now, the Tor Browser's fingerprinting protection is all-or-nothing, and it might soon become per-origin all-or-nothing (see Mozilla Bug 1450398). A privacy budget could be a nice next step, letting a user allow access to some of the fingerprinting surface without becoming completely identifiable.
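A per-origin version with a user-controlled allowance might look something like this (again just a sketch, reusing the hypothetical PrivacyLedger above):

    // Hypothetical per-origin budgets with a user-granted allowance.
    const DEFAULT_BUDGET_BITS = 6.0;

    class PerOriginBudgets {
      private ledgers = new Map<string, PrivacyLedger>();
      private extraAllowance = new Map<string, number>();

      // The user explicitly grants an origin some additional bits,
      // e.g. to let a site they trust read part of the canvas. In
      // this sketch the grant only applies to ledgers created later.
      grant(origin: string, bits: number): void {
        const current = this.extraAllowance.get(origin) ?? 0;
        this.extraAllowance.set(origin, current + bits);
      }

      charge(origin: string, api: string): boolean {
        let ledger = this.ledgers.get(origin);
        if (!ledger) {
          const extra = this.extraAllowance.get(origin) ?? 0;
          ledger = new PrivacyLedger(DEFAULT_BUDGET_BITS + extra);
          this.ledgers.set(origin, ledger);
        }
        return ledger.charge(api);
      }
    }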
What do you folks think?
Best,
Sanketh