(resending to tor-dev with tp.o email address)
On 07/08/2014 03:42 AM, Yan Zhu wrote:
> On 07/08/2014 12:07 AM, Jeroen Massar wrote:
>> On 2014-07-07 20:40, Red wrote:
>> [.. lots of cool work being worked on ..]
>>
>> Hi Zack,
>>
>> Seems you are doing lots of cool stuff ;)
>>
>> But I am one of those strange people who really hate that every
>> separate tool has its own updater (which can be used for tracking a
>> user, as the set of updater tools polling servers makes a fingerprint
>> in the same way other flows do).
>
> Hi Jeroen,
>
> This makes a lot of sense. I'm aware of the fingerprintability concern,
> and EFF tech projects generally try to mitigate it by polling the update
> servers at randomized intervals over fresh Tor circuits if possible. For
> this project, we initially proposed polling for an update when the
> browser starts and every 3 hours plus a uniformly distributed random
> number of milliseconds between 0 and 300000 (i.e., up to five minutes
> of jitter). I'm curious if others have
> more refined suggestions!
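>
> A minimal sketch of that schedule (function names hypothetical; this
> is not the actual implementation):
>
>   // Check once at startup, then every 3 hours plus 0-300000 ms of
>   // jitter, so update checks from different users don't line up.
>   const THREE_HOURS = 3 * 60 * 60 * 1000;
>   function scheduleUpdateCheck() {
>     checkForRulesetUpdate();  // hypothetical: performs the actual poll
>     var jitter = Math.floor(Math.random() * 300000);
>     setTimeout(scheduleUpdateCheck, THREE_HOURS + jitter);
>   }
>   scheduleUpdateCheck();  // called once when the browser starts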
>
>>
>> And thus I run Little Snitch and block those updates, until I deem it
>> a good time for the update to happen and trigger it manually.
>>
>> As such, when you get to the stage of adding features, it would be good
>> if there was:
>> - an option to disable the auto fetching
>
> Yes, this would be fairly easy to add.
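>
> A pref check at the top of the scheduling code would probably do
> (the pref name here is hypothetical):
>
>   // Skip auto-fetching entirely if the user disabled it.
>   var prefs = Components.classes["@mozilla.org/preferences-service;1"]
>       .getService(Components.interfaces.nsIPrefBranch);
>   if (!prefs.getBoolPref("extensions.https_everywhere.autoupdate_rulesets"))
>     return;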
>
>> - an option to trigger the fetching
>
> Probably also easy.
>
>> - to feed the update mechanism with a pre-fetched file
>> (eg provided through a different update mechanism)
>
> Since the update mechanism is just an XHR that downloads a new ruleset
> library from a hardcoded static URL and replaces the existing one in the
> Firefox profile directory, you could fetch-and-replace this manually via
> any number of mechanisms. :)
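>
> Roughly, the updater amounts to something like this (file name and
> URL are made up for illustration):
>
>   // Download the new ruleset library and overwrite the copy in the
>   // profile directory. Signature checking and error handling omitted.
>   Components.utils.import("resource://gre/modules/FileUtils.jsm");
>   var xhr = Components.classes["@mozilla.org/xmlextras/xmlhttprequest;1"]
>       .createInstance(Components.interfaces.nsIXMLHttpRequest);
>   xhr.open("GET", "https://www.eff.org/files/https-everywhere/rulesets.json");
>   xhr.onload = function () {
>     var file = FileUtils.getFile("ProfD", ["https-everywhere-rulesets.json"]);
>     var ostream = FileUtils.openSafeFileOutputStream(file);
>     ostream.write(xhr.responseText, xhr.responseText.length);
>     FileUtils.closeSafeFileOutputStream(ostream);
>   };
>   xhr.send();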
>
> Also, the ruleset libraries will still ship with extension updates, so
> you could disable ruleset updates and just wait for the next HTTPS
> Everywhere release.
>
> -Yan
>
>>
>> Greets,
>> Jeroen
>>
--
Yan Zhu <yan@eff.org>, <yan@torproject.org>
Staff Technologist
Electronic Frontier Foundation https://www.eff.org
815 Eddy Street, San Francisco, CA 94109 +1 415 436 9333 x134
(resending to tor-dev with tp.o email address)
On 07/08/2014 03:30 AM, Yan Zhu wrote:
> On 07/08/2014 02:55 AM, Ben Laurie wrote:
>> On 7 July 2014 19:40, Red <redwire@riseup.net> wrote:
>>> Although the process for producing the signature in question[2]
>>> seemed to work fine (OpenSSL was able to generate and verify the
>>> signature), the testing code calling the verifyData[3] function used
>>> for verification was returning an undocumented NS_ERROR_FAILURE
>>> exception. I spent a great deal of time asking for support in
>>> relevant Firefox extension development IRC channels, reading source
>>> code from unit tests for the nsIDataSignatureVerifier component, and
>>> experimenting with alternative openssl commands, trying to figure
>>> out why this error was occurring.
>>
>> Looking at the pk1sign source, it looks like the signature needs to be
>> in base64. Was that what you were using?
>>
>> Do you have a test case that fails using command line tools?
>
> I think Zack's original failing test case was generated via something like:
> $ openssl rsautl -sign -in update.digest -out signtmp.sig -inkey privkey.pem
> $ openssl base64 -in signtmp.sig -out update.json.sig
>
> as described in the original spec that we wrote:
> https://github.com/redwire/https-everywhere/blob/makeJSONManifest/doc/updat…
>
> Here is the diff between the failing test and the passing test:
> https://github.com/redwire/https-everywhere/commit/8b3c85d9d90d679e8b699701….
> I generated the data for the passing test with pk1sign.
>
> The documentation for nsIDataSignatureVerifier does not really describe
> the expected data format for the signature [1], so it took a while to
> figure out that it expects a very specialized form [2].
>
> [1]
> https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Reference/Inter…
> [2] https://bugzilla.mozilla.org/show_bug.cgi?id=685852#c0
>
>
--
Yan Zhu <yan@eff.org>, <yan@torproject.org>
Staff Technologist
Electronic Frontier Foundation https://www.eff.org
815 Eddy Street, San Francisco, CA 94109 +1 415 436 9333 x134
Hello everyone,
I would like to quickly summarize what has been going on over the last
couple of weeks of my work. Because my mentor, Yan, has been away
traveling, we haven't been able to hold our usual formal weekly meeting
on IRC. We have, however, been emailing frequently on the HTTPS
Everywhere mailing list to discuss some of the problems I worked on
last week. We have also compiled a list of things for me to work on
next week, which I will summarize here.
For the past week, I have been grappling with a problem in digital
signature verification using nsIDataSignatureVerifier, the XPCOM
component designed to handle this task. I wrote a few tests[1] that
use the testing mechanism Yan built, to make sure that the hashing and
signature verification my project relies on for security actually
work. The testing mechanism was built after our last meeting the
Friday before last, during which we realized we could write a separate
Firefox extension using the Addon SDK (and thus Jetpack's testing
suite) and import the HTTPS Everywhere component into that extension
for testing purposes.
Although the process for producing the signature in question[2] seemed
to work fine (OpenSSL was able to generate and verify the signature),
the testing code calling the verifyData[3] function used for
verification was returning an undocumented NS_ERROR_FAILURE exception.
I spent a great deal of time asking for support in relevant Firefox
extension development IRC channels, reading source code from unit
tests for the nsIDataSignatureVerifier component, and experimenting
with alternative openssl commands, trying to figure out why this error
was occurring.
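For the curious, the failing check boils down to a call like the
following (the fixture names are hypothetical): verifyData returns
true or false on well-formed input, but throws NS_ERROR_FAILURE when
the signature or key is not in the exact encoding it expects.

  // Jetpack-style test in the separate testing extension:
  const { Cc, Ci } = require("chrome");
  exports["test ruleset update signature"] = function (assert) {
    var verifier = Cc["@mozilla.org/security/datasignatureverifier;1"]
        .createInstance(Ci.nsIDataSignatureVerifier);
    // updateJSON, signature, and publicKey are hypothetical fixtures.
    assert.ok(verifier.verifyData(updateJSON, signature, publicKey),
              "update manifest signature should verify");
  };
  require("sdk/test").run(exports);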
Yan was able to get the test to pass by generating a key and signature
using the NSS tools. However, she has said that this process is more
involved than we would like, and is probably not feasible on EFF's
airgapped machine, which hosts the offline private signing key but
does not have the NSS tools available. To overcome this limitation, I
will be porting either the Uhura tool that has been used in the past
or the pk1sign program[4] that Yan found referenced in a Bugzilla
report.
For now, that's what I'll be doing. I am optimistic that, once this
issue of generating an appropriate (i.e., verifiable by
nsIDataSignatureVerifier) signature is resolved, the tests for and
refactoring of the secure update mechanism I am building will be
complete before long.
I have compiled some of the discussion Yan and I have had via email
into my weekly meeting notes[5], even though no actual meeting took
place. As usual, I welcome any advice and input!
Cheers,
Zack
[1]:
https://github.com/redwire/https-everywhere/blob/feature/tests/https-everyw…
[2]:
https://github.com/redwire/https-everywhere/blob/rulesetUpdating/doc/update…
[3]:
https://developer.mozilla.org/en-US/docs/Mozilla/Tech/XPCOM/Reference/Inter…
[4]:
http://dxr.mozilla.org/mozilla-central/source/security/nss/cmd/pk1sign/pk1s…
[5]: https://gist.github.com/redwire/b62f03905a826e79947a#week-8
Maybe we can do away with issue 9971 and improve readability:
https://trac.torproject.org/projects/tor/ticket/9971
In response to this mail, I'll append separate patches for the 3
sub-issues mentioned. They apply (and are build-tested) separately
against the current tree, so a maintainer can quickly pick one of them
up if happy with it.
There shouldn't be a conflict between *1* and *2*, but I could append a
patchset anyway.
Sub-issue *3* is *only* a call for advice; it will be written on top of
the other ones after discussion.
1* rename entry_guard_t's made_contact to used_so_save_if_down
I think that's readable. Is that about what arma had in mind?
2* rename for_discovery argument of add_an_entry_guard() to
forced_probationary
I like probationary more than provisional; those two are suggested in
issue 9971. I chose forced_probationary for now, because isn't it
strictly a suboptimal situation in terms of the desired 'grade of
anonymity'? What do you think?
3* NEEDS REVIEW FIRST: regarding the int arguments of
add_an_entry_guard(). I look at:
node_t *chosen is a node to add.
prepend is set if the guard should become first in the list?
There are 2 callers of add_an_entry_guard() that pass it a chosen
node: one is a bridge (prepend) and the other is a user-defined node
(!prepend). So, given that the list is not supposed to be long and the
two callers are somewhat similar, why not prepend the node whenever it
is explicitly given?
(why doesn't git send-email work for this list?)
This is UNTESTED and, for now, more a question of whether it may
suffice to close ticket 4019:
--
Martin Kepplinger
http://martinkepplinger.com
These are mostly David Fifield's words from an email exchange.
---
I re-read proposal 203 the other day and wondered how it was related to
the meek pluggable transport. As I might not be the only one, I thought
it could be worthwhile to share David's answer. Feel free to improve!
proposals/203-https-frontend.txt | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/proposals/203-https-frontend.txt b/proposals/203-https-frontend.txt
index 26101b3..df30cd5 100644
--- a/proposals/203-https-frontend.txt
+++ b/proposals/203-https-frontend.txt
@@ -245,3 +245,31 @@ Side note: What to put on the webserver?
"Something to add to your HTTPS website" rather than as a standalone
installation.
+Related work:
+
+ meek [1] is a pluggable transport that uses HTTP for carrying bytes
+ and TLS for obfuscation. Traffic is relayed through a third-party
+ server (Google App Engine). It uses a trick to talk to the third
+ party so that it looks like it is talking to an unblocked server.
+
+ meek itself is not really about HTTP at all. It uses HTTP only
+ because it's convenient and the big Internet services we use as cover
+ also use HTTP. meek uses HTTP as a transport, and TLS for
+ obfuscation, but the key idea is really "domain fronting," where it
+ appears to the censor you are talking to one domain (www.google.com),
+ but behind the scenes you are talking to another
+ (meek-reflect.appspot.com). The meek-server program is an ordinary
+ HTTP (not necessarily even HTTPS!) server, whose communication is
+ easily fingerprintable; but that doesn't matter because the censor
+ never sees that part of the communication, only the communication
+ between the client and CDN.
+
+ One way to think about the difference: if a censor (somehow) learns
+ the IP address of a bridge as described in this proposal, it's easy
+ and low-cost for the censor to block that bridge by IP address. meek
+ aims to make it much more expensive: even if you know a domain is
+ being used (in part) for circumvention, in order to block it you have to
+ block something important like the Google frontend or CloudFlare
+ (high collateral damage).
+
+1. https://trac.torproject.org/projects/tor/wiki/doc/meek
--
1.7.10.4
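
To make the domain-fronting trick above concrete, here is a minimal
sketch (Node.js for brevity; meek's real client is written in Go, and
whether a given front forwards the Host header this way is an
assumption, not something this patch specifies):

  // TLS SNI and DNS name www.google.com (what the censor sees), while
  // the Host header names the real backend (what the front routes on).
  var https = require('https');
  https.request({
    host: 'www.google.com',                        // visible to the censor
    path: '/',
    headers: { Host: 'meek-reflect.appspot.com' }  // seen only by the front
  }, function (res) {
    console.log('fronted response status:', res.statusCode);
  }).end();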
Hello devs,
I'm continuously tweaking the Metrics Portal [0] in an attempt to make
it more useful. My latest idea is to finally spin off the Directory
Archive part, which is the part that serves descriptor tarballs. I'd
like to hear what people think about that.
Let me give you some more context. The Metrics Portal serves three main
purposes:
1. Graphs [1]: there are graphs on network size, network diversity,
user number estimates, and performance measurements. This is probably
what most visitors are interested in. In addition to graphs, there are
also tables and .csv files for download.
2. Research [2]: we offer descriptor tarballs for download and explain
the data formats. This is mostly interesting for researchers and
developers. (This is the part that I'd like to spin off and move to a
separate place.)
3. Status [3]: we provided (and still provide) services that are based
on archived descriptors. This includes ExoneraTor and Consensus Health,
both of which have moved to their own sites, and it includes Relay
Search, which is high on the list of endangered services. While we used
to provide these services on the Metrics Portal, they're almost all
gone from it now.
So, my plan is to split out the Research part, number 2 above, and
re-organize the remaining Graphs part to address visitors who are not
necessarily researchers or developers. The new Directory Archive
website would contain the following content:
- Start page: possibly re-using parts from the current start page [0].
- Data: How to obtain the data, possibly re-using parts from the
current Data page [4], but without separate pages for file lists.
- Formats: similar to the current Data Formats page [5].
- Tools: similar to the current Tools page [6].
What would be a good name for the new website holding the Tor Directory
Archive? How about:
- CollecTor, collector.torproject.org (not available yet) or
- AggregaTor, aggregator.torproject.org (not available yet)
On the software side, I'd like to remove all dynamic (Java) parts from
the new website and have it served by Apache alone instead of Tomcat.
The only parts that still need to be dynamically generated would be file
lists, and I'd like to solve that by using Apache directory listings or
some other Apache module.
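
For the file lists, an Apache snippet along these lines might already
do (a sketch only; the path and options are assumptions, not a
worked-out config):

  # Let mod_autoindex generate the file lists that Tomcat used to
  # render dynamically.
  <Directory "/srv/directory-archive/archive">
    Options +Indexes
    IndexOptions FancyIndexing NameWidth=*
  </Directory>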
The re-organized Metrics Portal is going to have the following content:
- Start page: possibly re-using parts from the current start page [0].
- Graphs: all four sub pages from the current Graphs page [1], so
Network, Bubbles, Users, and Performance.
- Aggregated data: the processed data behind graphs that is currently
available on the Statistics page [7].
Speaking of which, if anybody wants to help design the new website (or help
re-design the existing Metrics Portal website once the cruft is gone),
your help would be much appreciated. Bonus points if no JavaScript is
required, at least for the Directory Archive website. Please contact me
if you're interested.
Any feedback welcome! Thanks!
All the best,
Karsten
[0] https://metrics.torproject.org/
[1] https://metrics.torproject.org/graphs.html
[2] https://metrics.torproject.org/research.html
[3] https://metrics.torproject.org/status.html
[4] https://metrics.torproject.org/data.html
[5] https://metrics.torproject.org/formats.html
[6] https://metrics.torproject.org/tools.html
[7] https://metrics.torproject.org/stats.html