What technical properties of the web make such services impossible to use?
The web is not the right object to reason about here. The more interesting question would be: "What technical properties of a service make it impossible to use anonymously?" That remains to be researched. In the end, maybe there aren't any (though I doubt that).
Sure, anonymity is by definition impossible for things that require a name. As long as that name can be an ephemeral pseudonym, I think we're good on the web.
But once you start getting into real personal and/or biometric details (even just a photo), you obviously lose your anonymity set. Again, I think what we're talking about here is "Layer 8".
Sure, as no layer between 1 and 7 is involved, it is by definition "layer" 8.
I would be glad if that were the case, but I doubt it (having e.g. Facebook in mind).
Can you provide specific concerns about Facebook wrt the properties from the blog post?
Not yet, no. I am not a Facebook user and therefore have to look at research papers investigating it. And the things I read, e.g. in http://www.research.att.com/~bala/papers/w2sp11.pdf or in http://www.research.att.com/~bala/papers/wosn09.pdf, do not seem to break the idea proposed in the blog post. But again, there is research to be done here, I guess. Redirects (you mentioned them already) could pose a serious threat to the approach in the blog post, though (systems like Phorm, http://www.cl.cam.ac.uk/~rnc1/080518-phorm.pdf, come to mind).
No, you got me wrong here. The downgrading occurs while designing the anon mode, not while using it. There should be just one mode in order to avoid fingerprinting issues. It is merely meant as a design principle for the dev: start with all the defenses we can get and then downgrade them until we reach a good balance between usability and anonymity features.
Ok. Let's try to do this. Where do we start from? Can we start from the design I proposed and make it more strict?
Yes, good idea.
What do you have in mind in terms of stricter controls?
Hmmm... Dunno what you mean here.
What changes to the design might you propose?
There are basically two points to be mentioned here IMO:
1) Additionally having tab (window) isolation (see my comments below)
and
2) Having some means to break the linkage between visits to the same domain within a tab. That would be the best I can imagine and would help against attacks using redirects as well, but it is hard to get right. E.g. one would have to give the user means to fine-tune the default setting to their needs without ending up with a UI nightmare. And there are probably numerous other pitfalls lurking here... We have already done some basic research (we supervised a BA thesis investigating this for cookies), but there is still a lot to do. But yes, I would like to have that feature and to invest some energy into investigating whether one can get it right in a meaningful way.
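To make the two points above concrete, here is a minimal sketch (all names are illustrative, not from any real implementation) of how a cookie jar could be double-keyed by tab and domain, with a per-domain "visit generation" whose increment breaks the linkage between two visits to the same domain in the same tab:

```javascript
// Hypothetical sketch of per-tab isolation plus unlinkable repeat visits.
// Instead of one global cookie jar, cookies are keyed by
// (tab, domain, visit generation). Bumping the generation for a
// (tab, domain) pair makes later visits start with an empty jar.
class IsolatedCookieJar {
  constructor() {
    this.jars = new Map();        // "tab|domain|gen" -> Map(name -> value)
    this.generations = new Map(); // "tab|domain" -> visit counter
  }

  key(tabId, domain) {
    const gen = this.generations.get(`${tabId}|${domain}`) || 0;
    return `${tabId}|${domain}|${gen}`;
  }

  set(tabId, domain, name, value) {
    const k = this.key(tabId, domain);
    if (!this.jars.has(k)) this.jars.set(k, new Map());
    this.jars.get(k).set(name, value);
  }

  get(tabId, domain, name) {
    const jar = this.jars.get(this.key(tabId, domain));
    return jar ? jar.get(name) : undefined;
  }

  // Called e.g. when the user deliberately re-enters the domain:
  // the next visit sees an empty jar, unlinkable to the previous one.
  newVisit(tabId, domain) {
    const k = `${tabId}|${domain}`;
    this.generations.set(k, (this.generations.get(k) || 0) + 1);
  }
}
```

The same keying scheme would have to cover DOM storage and similar state as well, and the hard part Georg mentions remains: deciding when `newVisit` fires, and letting the user tune that without a UI nightmare.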
You're just preventing "accidental" information leakage at the cost of breaking functionality. The ad networks will adapt to this sort of change, and then you're right back where you started in terms of actual tracking, after years of punishing your users with broken sites.
This depends on how one designs the individual features. But I get your point.
If you want to suggest how to fine-tune the referer/window.name policy, let's discuss that.
Dunno. I think the smart-spoof functionality is working pretty well. I am not sure whether you take special care regarding third-party content. We check for this and leave the referer unmodified, as an attacker does not gain any information from it (on the contrary, it might look strange if someone does not send a referer while requesting a third-party image or some other third-party resource).
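The third-party exception described here could be sketched roughly as follows (function names and the spoofing choice are illustrative assumptions, not the actual implementation; a real version would use the Public Suffix List rather than the last two host labels):

```javascript
// Illustrative sketch: spoof the referer only for same-party requests,
// and leave it untouched for third-party subresources, where a missing
// referer would itself stand out as unusual.

// Simplification: treat the last two labels as the registrable domain.
// Real code would consult the Public Suffix List.
function baseDomain(host) {
  return host.split(".").slice(-2).join(".");
}

function refererFor(requestUrl, documentUrl) {
  const req = new URL(requestUrl);
  const doc = new URL(documentUrl);
  if (baseDomain(req.hostname) !== baseDomain(doc.hostname)) {
    // Third-party resource: send the referer unmodified, since the
    // destination is a different party anyway and stripping it here
    // would make the client look unusual.
    return documentUrl;
  }
  // Same-party request: spoof down to the origin, hiding path and query.
  return doc.origin + "/";
}
```

So e.g. a first-party image request from `https://www.example.com/secret/page?q=1` would carry only `https://www.example.com/` as referer, while a request to a third-party CDN would carry the full URL, unmodified.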
More broadly, perhaps there is some balance of per-tab isolation and origin isolation that is easily achievable in Firefox?
I hope so (at least if we had a Firefox fork, that would not be much of a problem anymore). The Multifox add-on (http://br.mozdev.org/multifox/all.html) claims to have implemented per-tab identities, and I have looked at it superficially. It is quite promising and deserves a thorough test.
In my experience, per-tab isolation is extremely hard.
I know.
How much of that have you already implemented?
Nothing yet. Frankly, I have not had the time to do so. But we have a good chance of getting a research grant in the near future (i.e. the next 3-4 months) for the next 2 years, and tab isolation (not only for cookies but for DOM storage and similar state as well) plus (2) mentioned above (we'll see how far I get in that regard) are my first (sub)projects. And even if we do not get that grant, implementing at least tab separation will be my next major task, I guess.
Regarding the research grant: I already wrote to pde and asked him whether he has some interesting ideas that we should try to incorporate into the application. If you (Mike) have something, don't hesitate to drop me a mail. We still have the opportunity to move the things we already have around a bit to get something we overlooked into our proposal (the deadline is the end of July). The topic is investigating and solving issues regarding an anonymous browser (profile), and developing one that is resilient to e.g. different fingerprinting attacks and tracking mechanisms in general.
Georg