You're right on the details part; this seems more like a research idea at this stage.
> but my immediate reaction was that window size + fonts + canvas - and maybe just two or even one of those - are enough to uniquely identify a user, so how do you handle that with the budget? :)
That is an excellent question and, if I understand properly (cf. [1]), one of the questions they want to answer with their large-scale study.
[1] https://github.com/bslassey/privacy-budget#measure-information-exposed-by-ea...
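For intuition on why two or three of those surfaces could already be enough, here is a rough back-of-the-envelope sketch in Python. The per-surface bit counts and the user count are illustrative assumptions, not measurements from any study:

    # Back-of-the-envelope anonymity-set arithmetic. The per-surface bit
    # counts and the population size are illustrative assumptions.
    surfaces_bits = {
        "window size": 5.0,   # assumed entropy contributed by this surface
        "font list": 7.0,     # assumed
        "canvas hash": 8.0,   # assumed
    }

    population = 2_000_000  # assumed number of daily users

    total_bits = sum(surfaces_bits.values())
    expected_matches = population / 2 ** total_bits

    print(f"combined entropy: {total_bits:.1f} bits")
    print(f"expected users sharing this fingerprint: {expected_matches:.1f}")

With only ~20 assumed bits across three surfaces, the expected anonymity set is already down to about two users, so a budget that lets a page "spend a little" on each surface can still end up being fully identifying. Measuring the real per-surface entropy is presumably what their study is for.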
-----Original Message-----
From: tbb-dev <tbb-dev-bounces@lists.torproject.org> On Behalf Of Tom Ritter
Sent: Monday, January 04, 2021 11:02 AM
To: discussion regarding Tor Browser Bundle development <tbb-dev@lists.torproject.org>
Subject: Re: [tbb-dev] Chrome's Proposed Privacy Budget?
I haven't watched the video, but my immediate reaction was that window size + fonts + canvas - and maybe just two or even one of those - are enough to uniquely identify a user, so how do you handle that with the budget? :) Basically I'm very skeptical of it, but I don't think there are enough details (AFAIK) right now to say conclusively that it isn't going to work (or that it might).
-tom
On Tue, 29 Dec 2020 at 01:12, Sanketh Menda <sgmenda@uwaterloo.ca> wrote:
Hello,
This month's #ChromeDevSummit hosted a talk on the privacy budget (https://youtu.be/0STgfjSA6T8), and in the talk they mentioned that they are currently running a large-scale study to identify which potentially identifying APIs sites are using and how much they are using them. This data might be really useful for us to improve the fingerprinting protections in the Tor Browser, so we might want to keep an eye on developments here.
Also, I think I am finally beginning to see the brilliance in the privacy budget proposal (https://github.com/bslassey/privacy-budget).
To the best of my knowledge, current mainstream browsers use some combination of hardcoded lists of companies (like the Disconnect list used by Firefox's Enhanced Tracking Protection), hardcoded lists of scripts (like uBlock Origin), and heuristics (from the relatively simple Privacy Badger to the more complicated Intelligent Tracking Prevention in Safari). These approaches are oddly reminiscent of old-school signature-based AV: rather than attacking the core problem, they try to identify bad scripts. As AV experience has taught us, such approaches might work against mass attacks but don't work well against targeted ones. So, while they allow for mainstream-browser-level usability, these approaches don't seem compatible with the threat model of the Tor Browser.
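To make the signature-based-AV analogy concrete, here is a minimal sketch of the list-based approach in Python (the list entries and the helper are made up for illustration, not taken from any real blocklist):

    # Minimal sketch of list-based tracker blocking, in the spirit of the
    # Disconnect list or uBlock-style host rules. Entries are illustrative.
    from urllib.parse import urlparse

    BLOCKED_HOSTS = {
        "tracker.example",        # hypothetical list entry
        "analytics.example.net",  # hypothetical list entry
    }

    def should_block(request_url: str) -> bool:
        """Block a request iff its host or a parent domain is on the list."""
        host = urlparse(request_url).hostname or ""
        parts = host.split(".")
        return any(".".join(parts[i:]) in BLOCKED_HOSTS for i in range(len(parts)))

    print(should_block("https://tracker.example/pixel.gif"))       # True
    print(should_block("https://new-tracker.example.org/spy.js"))  # False: not catalogued yet

A tracker hosted on a domain nobody has catalogued sails straight through, which is exactly the targeted-attack gap described above.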
This is where, IMO, the privacy budget shines. It goes straight to the problem: it tracks the APIs that reveal potentially identifying information and keeps a ledger of calls to them. This is more conservative than the approaches above and seems closer to the Tor Browser's current approach (which is to block or spoof the outputs of potentially identifying APIs). Of course, it comes with the same false-positive issue that currently affects the Tor Browser, but hey, we can't have everything. Moreover, it lends itself nicely to a good affordance: the browser can easily show the user how much data they are leaking to a webpage and, if the webpage exceeds the set budget and requests more API calls, how much more data allowing those calls would expose.
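As far as I can tell, the explainer doesn't pin down an enforcement mechanism yet, but the ledger idea could look roughly like this sketch; the per-API bit costs, the 10-bit budget, and all the names here are my own assumptions, not anything from the proposal:

    # Rough sketch of a per-origin privacy-budget ledger. Costs, budget,
    # and names are assumptions for illustration only.
    from collections import defaultdict

    # Assumed entropy cost, in bits, charged the first time an origin uses an API.
    API_COST_BITS = {
        "screen.width": 2.0,
        "canvas.toDataURL": 8.0,
        "navigator.fonts": 7.0,
    }

    BUDGET_BITS = 10.0  # assumed per-origin budget

    class PrivacyBudgetLedger:
        def __init__(self) -> None:
            # origin -> identifying APIs this origin has already been charged for
            self._charged = defaultdict(set)

        def spent(self, origin: str) -> float:
            return sum(API_COST_BITS[api] for api in self._charged[origin])

        def request(self, origin: str, api: str) -> bool:
            """Allow the call if the origin can afford it; charge on first use."""
            if api in self._charged[origin]:
                return True  # already paid for; no new information revealed
            if self.spent(origin) + API_COST_BITS[api] > BUDGET_BITS:
                return False  # over budget: block or spoof instead
            self._charged[origin].add(api)
            return True

    ledger = PrivacyBudgetLedger()
    print(ledger.request("https://news.example", "screen.width"))      # True  (2 bits spent)
    print(ledger.request("https://news.example", "canvas.toDataURL"))  # True  (10 bits spent)
    print(ledger.request("https://news.example", "navigator.fonts"))   # False (would exceed the budget)
    print(f"spent: {ledger.spent('https://news.example'):.1f} of {BUDGET_BITS} bits")

The same ledger is what enables the affordance above: the browser can show "this page has spent 10 of its 10 bits" and ask the user before allowing anything more.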
Right now, the Tor Browser's fingerprinting protection is all-or-nothing, and it might soon become per-origin all-or-nothing (see Mozilla Bug 1450398). A privacy budget might be a nice next step, where a user can allow access to some fingerprinting surface without potentially becoming completely identifiable.
What do you folks think?
Best,
Sanketh
_______________________________________________
tbb-dev mailing list
tbb-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tbb-dev