Hello Everyone,
I recently inadvertently opened a much larger can of worms than I'd
intended when fixing a bug reported downstream where cURL would,
when configured with certain DNS backends, fail to resolve .onion
addresses.
https://bugs.gentoo.org/887287
After doing some digging, I discovered that the c-ares library was
updated in 2018 to intentionally fail to resolve .onion addresses,
in line with RFC 7686, and that there was another reported 'bug' in
cURL about leaking .onion DNS requests:
https://github.com/c-ares/c-ares/issues/196
https://github.com/curl/curl/issues/543
I took the obviously sane and uncontroversial approach of making
sure that cURL would always behave the same way regardless of the
DNS backend in use, and that it would output a useful error message
when it failed to resolve a .onion address.
Unfortunately, this has made a lot of people very angry and been
~~widely regarded as a bad move~~ panned by a small subset of
downstream cURL users:
https://github.com/curl/curl/discussions/11125
https://infosec.exchange/@harrysintonen/110977751446379372
https://gitlab.torproject.org/tpo/core/torspec/-/issues/202
I accept that, in particular, transproxy users are being inconvenienced,
but I also don't want to go back to 'cURL leaks .onion DNS requests
_sometimes_'. As a career sysadmin and downstream bug triager: this
is the stuff that keeps me up late at night. Quite literally, far too
often.
I have found, however, that the downstreams I expected to be most
inconvenienced (Whonix and Tails) simply use SOCKS (see the libcurl
sketch after these links):
https://github.com/Kicksecure/sdwdate/commit/5724d83b258a469b7a9a7bbc651539…
https://github.com/Kicksecure/tb-updater/commit/d040c12085a527f4d39cb1751f2…
https://github.com/Kicksecure/usability-misc/blob/8f722bbbc7b7f2f3a35619a5a…
https://gitlab.tails.boum.org/tails/tails/-/issues/19488
https://gitlab.tails.boum.org/tails/tails/-/merge_requests/1123
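For reference, the SOCKS approach with libcurl looks roughly like the
minimal sketch below. It assumes a local Tor SocksPort on
127.0.0.1:9050, and the .onion URL is a placeholder. Using a
`socks5h://` proxy hands name resolution to the proxy (i.e. Tor), so
the .onion name never reaches the system DNS backend at all.
```c
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *curl = curl_easy_init();
  if (!curl)
    return 1;

  /* Placeholder address; substitute a real onion service. */
  curl_easy_setopt(curl, CURLOPT_URL, "http://example.onion/");

  /* socks5h:// makes the proxy (Tor) resolve the hostname, so the
   * .onion name is never sent to the system resolver. */
  curl_easy_setopt(curl, CURLOPT_PROXY, "socks5h://127.0.0.1:9050");

  CURLcode res = curl_easy_perform(curl);
  if (res != CURLE_OK)
    fprintf(stderr, "curl_easy_perform() failed: %s\n",
            curl_easy_strerror(res));

  curl_easy_cleanup(curl);
  curl_global_cleanup();
  return res == CURLE_OK ? 0 : 1;
}
```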
I've asked in the Tor Specifications issue (inspired by Silvio's
suggestions), and in the cURL issue, but I seem to be getting nowhere
and the impacted users are clamouring for a quick band-aid solution,
which I feel will work out worse for everyone in the long run:
>How can client applications (safely):
>
>1. discover that they're in a Tor-enabled environment
>2. resolve onion services only via Tor in that circumstance
>3. not leak .onion resolution attempts at all
>
>Right now, not making these requests in the first place is the
>safest (and correct) thing to do, however inconvenient it may be.
>Rather than immediately trying to come up with a band-aid approach
>to this problem, a sane mechanism needs to be implemented to:
>
>1. prevent each application from coming up with its own solution
>2. prevent inconsistency in .onion resolution (i.e. no "oh, it only
>leaks if DO_ONION_RESOLUTION is set")
>3. provide a standardised mechanism for applications that want to be
>Tor-aware to discover that they're in a Tor-enabled environment.
I'm not particularly attached to that last point, but it's worth discussing.
On a related note:
- is the use of a transparent proxy recommended?
- is there a sane alternative that requires as little configuration
as possible for these users?
I'm not sure what the best way forward is here, but I'm hoping that
actual Tor developers might have a useful opinion on the matter, or
at least be able to point me in the right direction.
Thanks for your time,
Cheers,
Matt
```
Filename: 350-remove-tap.md
Title: A phased plan to remove TAP onion keys
Author: Nick Mathewson
Created: 31 May 2024
Status: Open
```
## Introduction
Back in [proposal 216], we introduced the `ntor` circuit extension handshake.
It replaced the older `TAP` handshake, which was badly designed,
and dependent on insecure key lengths (RSA1024, DH1024).
With the [final shutdown of v2 onion services][hsv2-deprecation],
there are no longer any supported users of TAP anywhere in the Tor protocols.
Anecdotally, a relay operator reports that fewer than
one handshake in 300,000 is currently TAP.
(Such handshakes are presumably coming from long-obsolete clients.)
Nonetheless, we continue to bear burdens from TAP support.
For example:
- TAP keys compose a significant (but useless!) portion
of directory traffic.
- The TAP handshake requires cryptographic primitives
used nowhere else in the Tor protocols.
Now that we are implementing [relays in Arti],
the time is ripe to remove TAP.
(The only alternative is to add a useless TAP implementation in Arti,
which would be a waste of time.)
This document outlines a plan to completely remove the TAP handshake,
and its associated keys, from the Tor protocols.
This is, in some respects, a modernized version of [proposal 245].
## Migration plan
Here is the plan in summary;
we'll discuss each phase below.
- Phase 1: Remove TAP dependencies
- Item 1: Stop accepting TAP circuit requests.
- Item 2: Make TAP keys optional in directory documents.
- Item 3: Publish dummy TAP keys.
- Phase 2: After everybody has updated
- Item 1: Allow TAP-free routerdescs at the authorities
- Item 2: Generate TAP-free microdescriptors
- Phase 3: Stop publishing dummy TAP keys.
Phase 1 can begin immediately.
Phase 2 can begin once all supported clients and relays have upgraded
to run versions with the changes made in Phase 1.
Phase 3 can begin once all authorities have made the changes
described in phase 2.
### Phase 1, Item 1: Stop accepting TAP circuit requests.
(All items in phase 1 can happen in parallel.)
Immediately, Tor relays should stop accepting TAP requests.
This includes all CREATE cells (not CREATE2),
and any CREATE2 cell whose type is TAP (0x0000).
When receiving such a request,
relays should respond with DESTROY.
Relays MAY just drop the request entirely, however,
if they find that they are getting too many requests.
Such relays must stop reporting `Relay=1`
among their supported protocol versions.
(This protocol version is not currently required or recommended.)
> If this proposal is accepted,
> we should clarify the protocol version specification
> to say that `Relay=1` specifically denotes TAP.
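As a rough illustration of the check involved, here is a sketch with
made-up types and constants (not the actual C tor or Arti code); the
cell command values and the ntor type 0x0002 match the current
protocol, but the structures are purely illustrative:
```c
#include <stdio.h>

/* Illustrative constants and types only. */
#define CELL_CREATE        1      /* legacy CREATE cell (rejected outright) */
#define CELL_CREATE2       10     /* CREATE2 cell (carries a handshake type) */
#define HANDSHAKE_TYPE_TAP 0x0000 /* TAP handshake type in CREATE2           */

typedef struct cell_t {
  int command;
  unsigned handshake_type; /* only meaningful for CREATE2 */
} cell_t;

/* Returns 1 if this create request must be rejected under Phase 1, Item 1:
 * the relay should answer with DESTROY, or drop it entirely under load. */
static int
create_request_is_tap(const cell_t *cell)
{
  if (cell->command == CELL_CREATE)
    return 1;
  if (cell->command == CELL_CREATE2 &&
      cell->handshake_type == HANDSHAKE_TYPE_TAP)
    return 1;
  return 0;
}

int main(void)
{
  cell_t legacy = { CELL_CREATE, 0 };
  cell_t tap2   = { CELL_CREATE2, HANDSHAKE_TYPE_TAP };
  cell_t ntor2  = { CELL_CREATE2, 0x0002 }; /* ntor: accepted */
  printf("%d %d %d\n",
         create_request_is_tap(&legacy),
         create_request_is_tap(&tap2),
         create_request_is_tap(&ntor2));
  return 0;
}
```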
### Phase 1, Item 2: Make TAP keys optional in directory documents.
(All items in phase 1 can happen in parallel.)
In C tor and Arti, we should make the `onion-key` entry
and the `onion-key-crosscert` entry optional.
(If either is present, the other still must be present.)
When we do this, we should also modify the authority code
to reject any descriptors that do not have these fields.
(This will be needed so that existing Tor instances do not break.)
In the microdescriptor documents format, we should make
the _object_ of the `onion-key` element optional.
(We do not want to make the element itself optional,
since it is used to delimit microdescriptors.)
We use new protocol version flags (Desc=X, Routerdesc=Y)
to note the ability to parse these documents.
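To make the coupling rule concrete, the validity check amounts to
something like the following sketch (the struct and field names are
placeholders, not the actual parser types in C tor or Arti):
```c
#include <stdbool.h>
#include <stddef.h>

/* Placeholder representation of the two TAP-related descriptor entries. */
struct routerdesc_tap_fields {
  const char *onion_key;           /* NULL if the entry is absent */
  const char *onion_key_crosscert; /* NULL if the entry is absent */
};

/* Both entries present, or both absent; anything else is invalid. */
static bool
tap_fields_consistent(const struct routerdesc_tap_fields *rd)
{
  return (rd->onion_key != NULL) == (rd->onion_key_crosscert != NULL);
}

int main(void)
{
  struct routerdesc_tap_fields both_absent = { NULL, NULL };
  struct routerdesc_tap_fields mismatched  = { "dummy-key", NULL };
  return (tap_fields_consistent(&both_absent) &&
          !tap_fields_consistent(&mismatched)) ? 0 : 1;
}
```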
### Phase 1, Item 3: Publish dummy TAP keys
(All items in phase 1 can happen in parallel.)
Even after step 2 is done,
many clients and relays on the network
will still require TAP keys to be present in directory documents.
Therefore, we can't remove those keys right away.
Relays, therefore, must put _some_ kind of RSA key
into their `onion-key` field.
I'll present three designs for what relays should do.
We should pick one.
> #### Option 1 (conservative)
>
> Maybe, we should say that a relay
> should generate a TAP key, generate an onion-key-crosscert,
> and then discard the private component of the key.
> #### Option 2 (low-effort)
>
> In C tor, we can have relays simply proceed as they do now,
> maintaining TAP private keys and generating crosscerts.
>
> This has little real risk beyond what is described in Option 1.
> #### Option 3 (nutty)
>
> We _could_ generate a global, shared RSA1024 private key,
> to be used only for generating onion-key-crosscerts
> and placing into the onion-key field of a descriptor.
>
> We would say that relays publishing this key MUST NOT
> actually handle any TAP requests.
>
> The advantage of this approach over Option 1
> would be that we'd see gains in our directory traffic
> immediately, since all identical onion keys would be
> highly compressible.
>
> The downside here is that any client TAP requests
> sent to such a relay would be decryptable by anybody,
> which would expose long-obsolete clients to MITM attacks
> by hostile guards.
We would control the presence of these dummy TAP keys
using a consensus parameter:
`publish-dummy-tap-key` — If set to 1, relays should include a dummy TAP key
in their routerdescs. If set to 0, relays should omit the TAP key
and corresponding crosscert. (Min: 0, Max: 1, Default: 0.)
We would want to ensure that all authorities voted for this parameter as "1"
before enabling support for it at the relay level.
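As a sketch of the relay-side logic, the gate would look roughly like
this; `consensus_param()` here is a stub standing in for however the
implementation actually reads consensus parameters, and the parameter
name is the one proposed above:
```c
#include <stdio.h>
#include <stdbool.h>

/* Stub: stands in for the implementation's consensus-parameter lookup. */
static int
consensus_param(const char *name, int dflt, int min, int max)
{
  (void)name; (void)min; (void)max;
  return dflt; /* pretend the parameter is unset */
}

/* Publish a dummy onion-key/onion-key-crosscert pair only when the
 * proposed parameter is 1; otherwise omit both fields. */
static bool
should_publish_dummy_tap_key(void)
{
  return consensus_param("publish-dummy-tap-key", 0, 0, 1) == 1;
}

int main(void)
{
  printf("publish dummy TAP key: %s\n",
         should_publish_dummy_tap_key() ? "yes" : "no");
  return 0;
}
```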
### Phase 2, Item 1: Allow TAP-free routerdescs at the authorities
Once all clients and relays have updated to a version
where the `onion-key` router descriptor element is optional
(see phase 1, item 2),
we can remove the authority code that requires
all descriptors to have TAP keys.
### Phase 2, Item 2: Generate TAP-free microdescriptors
Once all clients and relays have updated to a version
where the `onion-key` body is optional in microdescriptors
(see phase 1, item 2),
we can add a new consensus method
in which authorities omit the body when generating microdescriptors.
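To illustrate the change (key material elided, and the layout below is
only a sketch of the relevant microdescriptor lines, not a complete
microdescriptor), a current microdescriptor begins roughly like this:
```
onion-key
-----BEGIN RSA PUBLIC KEY-----
<base64 TAP key elided>
-----END RSA PUBLIC KEY-----
ntor-onion-key <base64 ntor key elided>
```
Under the new consensus method, the `onion-key` line would remain as a
delimiter, but its RSA object would be omitted:
```
onion-key
ntor-onion-key <base64 ntor key elided>
```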
### Phase 3: Stop publishing dummy TAP keys.
Once all authorities have stopped requiring
the `onion-key` element in router descriptors
(see phase 2, item 1),
we can disable the `publish-dummy-tap-key` consensus parameter,
so that relays will no longer include TAP keys in their router descriptors.
[proposal 216]: ./216-ntor-handshake.txt
[proposal 245]: ./245-tap-out.txt
[hsv2-deprecation]: https://support.torproject.org/onionservices/v2-deprecation/
[relays in Arti]: https://gitlab.torproject.org/tpo/team/-/wikis/Sponsor%20141
Hello everyone,
I am a researcher currently looking into different schemes for what you call keyblinding in the rendezvous spec.
https://spec.torproject.org/rend-spec/keyblinding-scheme.html
I noticed that your description there mentions a secret `s` to be hashed into the blinding factor, and have a few questions about it:
1. Is this secret currently being used / intended to be used? If so, how?
2. What kinds of security (formally or informally) would you expect from using a secret in the derivation process? For example, do you just require that someone without `s` cannot look up the service, or is this also meant as a way of ensuring that HSDir nodes cannot find correlations between services and descriptors (amounting to some sort of additional censorship resistance)?
The reason I am asking is that my research has identified some potentially post-quantum-secure schemes which, for unknown identity keys, produce uncorrelatable blinded keys, but where, for a known public key, you can efficiently determine whether a blinded key is derived from it, even without knowing the blinding factor. I am wondering for which kinds of applications (with Tor being a major one) this would be relevant.
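To state that property slightly more formally ($\mathsf{Blind}$ and $\mathsf{Link}$ are just illustrative names I am using here, not taken from any particular scheme):

$$
\exists\ \text{efficient}\ \mathsf{Link}\ \text{such that}\quad \mathsf{Link}(A, A') = 1 \iff \exists\, r:\ A' = \mathsf{Blind}(A, r),
$$

while to anyone who does not know the identity key $A$, the blinded keys $\mathsf{Blind}(A, r)$ look unrelated to one another.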
If you have any insights, please let me know. Also, I am new to the tor-dev world, so feel free to send me to a different mailing list, should I have chosen the wrong one for this topic :)
Thanks in advance,
Thomas
--
```
M.Sc. Thomas Bellebaum
Applied Privacy Technologies
Fraunhofer Institute for Applied and Integrated Security AISEC
Lichtenbergstraße 11, 85748 Garching near Munich (Germany)
Tel. +49 89 32299 86 1039
thomas.bellebaum(a)aisec.fraunhofer.de
https://www.aisec.fraunhofer.de
```