The upstream obfs4 repository has a fix to the Elligator2 public key
representative leak (https://github.com/agl/ed25519/issues/27).
https://gitlab.com/yawning/obfs4/-/commit/393aca86cc3b1a5263018c10f87ece09a…
All releases prior to this commit are trivially distinguishable
with simple math, so upgrading is strongly recommended. The
upgrade is fully backward-compatible with existing
implementations; however, the non-upgraded side will emit traffic
that is trivially distinguishable from random.
The file internal/README.md elaborates:
All existing versions prior to the migration to the new code
(anything that uses agl's code) are fatally broken, and trivial
to distinguish via some simple math. For more details see Loup
Vaillant's writings on the subject. Any bugs in the
implementation are mine, and not his.
Representatives created by this implementation will correctly be
decoded by existing implementations. Public keys created by this
implementation, be it via the modified scalar basepoint multiply
or via decoding a representative, will be somewhat non-standard,
but will interoperate with a standard X25519 scalar-multiply.
As the obfs4 handshake does not include the decoded
representative in any of its authenticated handshake digest
calculations, this change is fully backward-compatible (though
the non-upgraded side of the connection will still be trivially
distinguishable from random).
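To give a feel for the kind of "simple math" distinguisher involved, here is an illustrative sketch only (not the published attack; the exact bias in agl's code is described in Loup Vaillant's writings): collect many 32-byte values that are supposed to be uniformly random on the wire, and test whether any bit position deviates from the expected 50% frequency. The `bit_bias` function and the simulated biased source below are my own hypothetical constructions, not the actual obfs4 or agl code.

```python
# Sketch of a per-bit frequency test over 32-byte strings that are
# supposed to look uniformly random on the wire. A biased source
# (e.g. one whose top bits are never set) stands out immediately.
import os

def bit_bias(samples):
    """Return the max deviation from 0.5 of any bit position's frequency."""
    n = len(samples)
    counts = [0] * 256
    for s in samples:
        for i in range(256):
            if (s[i // 8] >> (i % 8)) & 1:
                counts[i] += 1
    return max(abs(c / n - 0.5) for c in counts)

# Truly uniform samples: every bit is set about half the time.
uniform = [os.urandom(32) for _ in range(2000)]
# A hypothetical biased source whose top two bits are always zero --
# the kind of defect that makes traffic distinguishable from random.
biased = [s[:31] + bytes([s[31] & 0x3f]) for s in uniform]

print(bit_bias(uniform))  # small for genuinely uniform data
print(bit_bias(biased))   # 0.5: the cleared bits are never set
```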
We'll meet tomorrow, 2022-01-11 at 16:00 to cooperatively set up a
staging Snowflake bridge. Let's meet in #tor-dev and we can move from
there if needed. I am anticipating spending 2 hours or less on this.
https://bugs.torproject.org/tpo/anti-censorship/pluggable-transports/snowfl…
Here is a pad with an agenda and a place to draft an installation guide.
https://pad.riseup.net/p/pvKoxaIcejfiIbvVAV7j
If you have SSH and sudo access on the existing Snowflake bridge, you
should already have the same on the staging server. I am thinking that
we can share a screen session and work through the installation steps,
keeping notes in the pad.
There has been a near-total Internet shutdown in Kazakhstan since
2022-01-05, with only a few hours of partial connectivity per day. From
correspondence with some of the people affected, it appears that, for
whatever reason, proxies on TCP port 3785 are accessible during the
shutdown, at least on Kaz Telecom, the largest ISP. I set up an obfs4
bridge on port 3785 and a user reported that it was reachable.
It might be a good idea to push to have a few bridges that run on port
3785, at least for the front desk to hand out?
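For reference, a minimal torrc sketch for an obfs4 bridge listening on port 3785 (assuming obfs4proxy is installed at /usr/bin/obfs4proxy; the nickname and contact info are placeholders):

```
BridgeRelay 1
ORPort auto
ExtORPort auto
ServerTransportPlugin obfs4 exec /usr/bin/obfs4proxy
ServerTransportListenAddr obfs4 0.0.0.0:3785
Nickname ExampleBridge3785
ContactInfo operator@example.com
```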
https://ntc.party/t/network-shutdown-all-around-kazakhstan/1601
https://github.com/net4people/bbs/issues/99
https://ntc.party/t/network-shutdown-all-around-kazakhstan/1601/14
> SOCKS5 proxy 3785 port works fine. Not sure why, VoIP using skype and
> other services works as well, so I guess 3785 may be used for VoIP
>
> in general, it’s easy to configure in telegram, but if clients are
> able to configure proxy on their OS(for example using proxifyer) https
> and all other traffic works as well.
>
> This has been tested in at least 3 regions.
https://ntc.party/t/network-shutdown-all-around-kazakhstan/1601/16
> > I am not familiar with that one either. nmap-services calls it
> > bfd-echo “BFD Echo Protocol”. RFC 5881 says it is a UDP protocol:
>
> Yeah, if it’s not VoIP I have no idea why it works. I guess people
> found it out by brute-forcing different ports
https://ntc.party/t/network-shutdown-all-around-kazakhstan/1601/17
> Here is an obfs4 bridge on port 3785 (IPv4 and IPv6) to try in Tor
> Browser:
https://ntc.party/t/network-shutdown-all-around-kazakhstan/1601/19
> The IPv4 obfs4 bridge is working!
For the Moat and HTTPS distributors, BridgeDB uses a cache of
pregenerated captcha images. It does not generate a fresh captcha for
every challenge.
https://gitlab.torproject.org/tpo/anti-censorship/bridgedb/-/blob/eeca27703…
> ...The second method uses a local cache of pre-made CAPTCHAs,
> created by scripting Gimp using gimp-captcha. The latter
> cannot easily be run on headless server, unfortunately,
> because Gimp requires an X server to be installed.
https://gitlab.torproject.org/tpo/anti-censorship/bridgedb/-/blob/eeca27703…
imageFilename = random.SystemRandom().choice(os.listdir(self.cacheDir))
imagePath = os.path.join(self.cacheDir, imageFilename)
with open(imagePath, 'rb') as imageFile:
self.image = imageFile.read()
It may be that there are simply too few pregenerated captcha images. If
there are N total, and an adversary invests effort to solve n of them,
then the adversary will be served a captcha it already knows in a
fraction n/N of later bridge queries, until the cache of pregenerated
images is regenerated.
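To make the n/N fraction concrete (the numbers below are hypothetical, not measured):

```python
# Hypothetical numbers: a cache of N pregenerated captchas, of which
# the adversary has solved n in advance. Each challenge is drawn
# uniformly from the cache, so the chance that a later challenge is
# one the adversary already knows is n/N.
N = 10_000   # assumed cache size
n = 500      # captchas solved in advance
print(f"fraction of known challenges: {n/N:.1%}")  # 5.0%
```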
I downloaded 1000 captcha images from the Moat API and hashed them:
for a in $(seq 1 1000); do
  curl -s -x socks5h://127.0.0.1:9050/ https://bridges.torproject.org/moat/fetch \
    -H 'Content-type: application/vnd.api+json' \
    --data-raw '{"data": [{"version": "0.1.0", "type": "client-transports"}]}' \
    | jq '.data[0].image' | sha256sum
done | tee bridgedb.hashes
Out of 1000 images drawn randomly with replacement,
916 appeared 1 time
39 appeared 2 times
2 appeared 3 times
We can use a capture–recapture technique to estimate the total
population size.
https://en.wikipedia.org/wiki/Mark_and_recapture#Lincoln%E2%80%93Petersen_e…
Divide the 1000 images into 2 equal halves, and count the unique images
in each half: n = 488, k = 492. The number of images in the second half
that were already seen in the first half is K = 23. The estimate is
N = n*k/K = 488*492/23 ≈ 10439, so I guess the captcha cache dir on the
BridgeDB server holds only about 10000 images.
>>> pop = list(open("bridgedb.hashes"))
>>> s1, s2 = set(pop[:len(pop)//2]), set(pop[len(pop)//2:])
>>> len(s1)
488
>>> len(s2)
492
>>> len(s1.intersection(s2))
23
>>> len(s1)*len(s2)/len(s1.intersection(s2))
10438.95652173913
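As a sanity check on the estimator, we can rerun the same procedure on simulated data with a known population size (a sketch; the true cache size is of course unknown). The Chapman-corrected form (n+1)(k+1)/(K+1) − 1 is also shown, which is less biased when the overlap count is small:

```python
# Simulate the capture-recapture experiment: draw 1000 samples with
# replacement from a population of 10,000, split into halves, and
# apply the Lincoln-Petersen estimator as above.
import random

random.seed(7)

N_true = 10_000
draws = [random.randrange(N_true) for _ in range(1000)]

half1, half2 = set(draws[:500]), set(draws[500:])
n, k = len(half1), len(half2)
K = len(half1 & half2)   # items seen in both halves

lincoln_petersen = n * k / K
chapman = (n + 1) * (k + 1) / (K + 1) - 1
print(n, k, K)
print(round(lincoln_petersen), round(chapman))
```

The estimate has high variance when K is small (here K is only around two dozen), so "about 10000" should be read as an order-of-magnitude figure.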
It would be best to generate a fresh captcha image for each challenge,
but if that's not possible, we should increase the number of cached
images or regenerate the cache periodically.
Do I understand it right that if I enable the switch on the
https://snowflake.torproject.org/ page or install the extension in the
browser, people will eventually connect using my browser (and hence the
default WebRTC implementation, which won't be blocked)?
I can ask people to do that.
On 07.12.2021 13:28, meskio wrote:
> Quoting ValdikSS via anti-censorship-team (2021-12-06 18:53:55)
>> There's ongoing Tor block on certain Russian ISPs, which on top of everything
>> else includes Snowflake censorship using DPI.
>>
>> DTLS connection never establishes and gets filtered in
>> ClientHello/ServerHello sequence.
>>
>> Check the attached dumps from Tele2 cellular operator (as12958) made on
>> 3 and 6 December 2021.
>>
>> More information here:
>> https://ntc.party/t/ooni-reports-of-tor-blocking-in-certain-isps-since-2021…
>
> The related issue:
> https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow…
>
> Thanks for the dumps.
>
Hello everyone,
There's ongoing Tor block on certain Russian ISPs, which on top of
everything else includes Snowflake censorship using DPI.
DTLS connection never establishes and gets filtered in
ClientHello/ServerHello sequence.
Check the attached dumps from Tele2 cellular operator (as12958) made on
3 and 6 December 2021.
More information here:
https://ntc.party/t/ooni-reports-of-tor-blocking-in-certain-isps-since-2021…
Hello everyone,
There's ongoing Tor block on certain Russian ISPs, which on top of
everything else includes Snowflake censorship using DPI.
DTLS connection never establishes and gets filtered in
ClientHello/ServerHello sequence.
I wanted to attach the dump but the message got held. Visit this link
for dumps and more information:
https://ntc.party/t/ooni-reports-of-tor-blocking-in-certain-isps-since-2021…
Hello all,
First of all, I really appreciate all the work that the anti-censorship team has been putting into building better censorship circumvention tools, and I hope Snowflake becomes even more successful than obfs4.
I have a few questions about current developments,
1. What's the status of Snowflake Mobile development? It looks like the last commit was more than a year ago: https://gitlab.torproject.org/tpo/anti-censorship/pluggable-transports/snow… Is there a roadmap for getting it out?
2. Right now I assume the broker tends to hand out standalone-snowflake IPs before web-ext-snowflake IPs (that's how it was working, if I remember correctly; please correct me if I'm wrong). Is this still the case? I'm asking because I noticed a very impressive improvement in speed with Snowflake, and I would like to know whether that was just due to me ending up with a standalone snowflake, or whether performance and speed issues got ironed out for web-ext snowflakes.
3. The browser addon currently shows "Your snowflake has helped X users circumvent censorship in the last 24 hours". This might be bad UX: it should perhaps cover the last week or even month, since even though there are roughly 2k Snowflake users, the ratio of snowflakes to users is still big enough that many operators might be under the impression that they never helped any user, or that the extension isn't working correctly.
4. It might be a good idea to display the data in https://snowflake-broker.torproject.net.global.prod.fastly.net/metrics directly in https://metrics.torproject.org.
Thanks in advance for any answers.