Hello Everyone,
I recently inadvertently opened a much larger can of worms than I'd
intended when fixing a bug reported downstream where cURL would,
when configured with certain DNS backends, fail to resolve .onion
addresses.
https://bugs.gentoo.org/887287
After doing some digging I discovered that the c-ares library was
updated in 2018 to intentionally fail to resolve .onion addresses, in
line with RFC 7686, and that there was already a reported 'bug' in cURL
about leaking .onion DNS requests:
https://github.com/c-ares/c-ares/issues/196
https://github.com/curl/curl/issues/543
I took the obviously sane and uncontroversial approach of making
sure that cURL would always behave the same way regardless of the
DNS backend in use, and that it would output a useful error message
when it failed to resolve a .onion address.
Unfortunately, this has made a lot of people very angry and been
~~widely regarded as a bad move~~ panned by a small subset of
downstream cURL users:
https://github.com/curl/curl/discussions/11125
https://infosec.exchange/@harrysintonen/110977751446379372
https://gitlab.torproject.org/tpo/core/torspec/-/issues/202
I accept that, in particular, transproxy users are being inconvenienced,
but I also don't want to go back to 'cURL leaks .onion DNS requests
_sometimes_'. As a career sysadmin and downstream bug triager: this
is the stuff that keeps me up late at night. Quite literally, far too
often.
I have found, however, that the downstreams I expected to be most
inconvenienced (Whonix and Tails) simply use SOCKS:
https://github.com/Kicksecure/sdwdate/commit/5724d83b258a469b7a9a7bbc651539…
https://github.com/Kicksecure/tb-updater/commit/d040c12085a527f4d39cb1751f2…
https://github.com/Kicksecure/usability-misc/blob/8f722bbbc7b7f2f3a35619a5a…
https://gitlab.tails.boum.org/tails/tails/-/issues/19488
https://gitlab.tails.boum.org/tails/tails/-/merge_requests/1123
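For anyone unfamiliar with that approach, here is a minimal sketch of
what "just use SOCKS" looks like via libcurl's Python bindings (the
onion address and the 127.0.0.1:9050 SOCKS port below are placeholders
for whatever your Tor setup actually uses). With a socks5h:// proxy the
hostname is handed to Tor itself, so no .onion query ever reaches the
system resolver:
```
from io import BytesIO

import pycurl

# CLI equivalent: curl --proxy socks5h://127.0.0.1:9050 http://example.onion/
buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "http://example.onion/")      # placeholder onion address
# "socks5h" asks the proxy (Tor) to resolve the hostname itself, so the
# .onion name never touches libcurl's DNS backend (c-ares or otherwise).
c.setopt(pycurl.PROXY, "socks5h://127.0.0.1:9050")
c.setopt(pycurl.WRITEDATA, buf)
c.perform()
c.close()
```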
I've asked in the Tor Specifications issue (inspired by Silvio's
suggestions), and in the cURL issue, but I seem to be getting nowhere
and the impacted users are clamouring for a quick band-aid solution,
which I feel will work out worse for everyone in the long run:
>How can client applications (safely):
>
>1. discover that they're in a Tor-enabled environment
>2. resolve onion services only via Tor in that circumstance
>3. not leak .onion resolution attempts at all
>
>Right now, not making these requests in the first place is the
>safest (and correct) thing to do, however inconvenient it may be.
>Rather than immediately trying to come up with a band-aid approach
>to this problem, a sane mechanism needs to be implemented to:
>
>1. prevent each application from coming up with their own solution
>2. prevent inconsistency in .onion resolution (i.e. no "oh it only
>leaks if DO_ONION_RESOLUTION is set")
>3. provide a standardised mechanism for applications that want to be
>Tor-aware to discover that they're in a Tor-enabled environment.
I'm not particularly attached to that last point, but it's worth discussing.
On a related note:
- is the use of a transparent proxy recommended?
- is there a sane alternative that involves as little configuration as
possible for these users?
I'm not sure what the best way forward is here, but I'm hoping that
actual Tor developers might have a useful opinion on the matter, or
at least be able to point me in the right direction.
Thanks for your time,
Cheers,
Matt
Hello everyone,
I am a researcher currently looking into different schemes for what you call key blinding in the rendezvous spec.
https://spec.torproject.org/rend-spec/keyblinding-scheme.html
I noticed that your description there mentions a secret `s` to be hashed into the blinding factor, and have a few questions about it:
1. Is this secret currently being used / intended to be used? If so, how?
2. What kinds of security (formally or informally) would you expect from using a secret in the derivation process? For example, do you just require that someone without `s` cannot look up the service, or is this also meant as a way of ensuring that HSDir nodes cannot find correlations between services and descriptors (amounting to some sort of additional censorship resistance)?
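For reference, my current reading of the derivation is roughly the sketch below. This is only a paraphrase of the key blinding appendix, so the exact string constants, encodings and hash may not match the current spec, and the final blinded key A' = h*A (a scalar multiplication on ed25519) is omitted:
```
import hashlib

def blinding_factor(A: bytes, s: bytes, B: bytes, N: bytes) -> bytes:
    """h = H(BLIND_STRING | A | s | B | N), with H = SHA3-256.

    A: public identity key, s: the optional secret in question,
    B: string encoding of the ed25519 basepoint,
    N: "key-blind" | INT_8(period-number) | INT_8(period_length).
    """
    BLIND_STRING = b"Derive temporary signing key" + bytes([0])
    return hashlib.sha3_256(BLIND_STRING + A + s + B + N).digest()
```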
The reason I am asking is that my research has identified some potentially post-quantum secure schemes which, for unknown identity keys, produce uncorrelatable blinded keys, but where, given a known public key, you can efficiently determine whether a blinded key was derived from it, even without knowing the blinding factor. I am wondering for which kinds of applications (with Tor being a major one) this property would be relevant.
If you have any insights, please let me know. I am also new to the tor-dev world, so feel free to send me to a different mailing list, should I have chosen the wrong one for this topic :)
Thanks in advance,
Thomas
--
```
M.Sc. Thomas Bellebaum
Applied Privacy Technologies
Fraunhofer Institute for Applied and Integrated Security AISEC
Lichtenbergstraße 11, 85748 Garching near Munich (Germany)
Tel. +49 89 32299 86 1039
thomas.bellebaum(a)aisec.fraunhofer.de
https://www.aisec.fraunhofer.de
```
Dear Tor Project Developers,
I hope this email finds you well. I am writing to share with you a project [1] I have been working on called Tor Watchdog Bot [2], and I believe it may be of interest to you.
Tor Watchdog Bot is a Telegram bot designed to monitor the status of Tor relays and notify users when relays go offline. The intent behind creating this bot was to develop a user-friendly tool that allows anyone to easily keep track of their Tor nodes.
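To give an idea of the kind of check involved, the sketch below polls the public Onionoo API (onionoo.torproject.org) for a relay fingerprint and reports whether it is currently running. This is only an illustration of the concept, not the bot's actual code:
```
import json
import urllib.request

def relay_is_running(fingerprint: str) -> bool:
    # Ask Onionoo for the relay's current status by fingerprint.
    url = ("https://onionoo.torproject.org/details?lookup=" + fingerprint
           + "&fields=nickname,running")
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    relays = data.get("relays", [])
    return bool(relays) and relays[0].get("running", False)
```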
I am fully aware that the bot is not currently ready for public release, as it requires several bug fixes and improvements. Before proceeding with further development, I wanted to make sure that this project aligns with your interests and goals. To prioritize user privacy and security, I have not enabled the real backend; instead, the bot currently runs a demo that always returns the same results, so users can see the kind of response they would receive if the backend were active.
In the README.md file, you will find detailed instructions on how to set up and test the bot. If you find this project interesting and believe it could be beneficial to the Tor community, I would be delighted to see Tor Project host this service. However, I understand the importance of avoiding any form of user profiling, so I want to ensure that any potential hosting solution prioritizes user privacy.
The code for Tor Watchdog Bot is released under the GPLv3 license, so you are free (libre) to use and modify it as you see fit. I would be thrilled to contribute to the development of this project if it aligns with your vision.
Thank you for considering this proposal. I look forward to hearing your thoughts and feedback.
Best regards,
Aleff.
[1] https://github.com/aleff-github/TorWatchdog
[2] https://t.me/TorWatchdog_bot
---
Browse my website: aleff-gitlab.gitlab.io
Use my PGP Public Key: pgp.mit.edu/pks/lookup?op=get&search=0x7CFCE404A2168C85
Join and support:
- Free Software Foundation! (my.fsf.org/join?referrer=6202114)
- Electronic Frontier Foundation! (eff.org)
- Tor-Project (torproject.org)
- Signal (signal.org)