Inspired by ioerror's "freenote" (https://github.com/ioerror/freenote), I started toying with gstreamer for 1:1 encrypted, anonymous voice chat. So far, the result is an (experimental) command for "carml" that does just that.
At a very high level, you are either the "initiator" or the "responder". The initiator runs "carml voicechat", which launches a hidden service. The responder runs something like "carml voicechat --client tor:blargfoo.onion", which connects to said hidden service.
This then essentially cross-connects the mic + speakers of each side via an Opus + OGG stream over a single Tor TCP connection.
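To make the shape of that concrete, here's a minimal stand-alone loopback sketch in Python (the element names, options and localhost port are my guesses for illustration, not necessarily what the branch actually does; as written it just echoes your own mic back at you):

import time
import gobject
gobject.threads_init()
import pygst
pygst.require("0.10")
import gst

# "receiving" half: listen on a local port, demux the Ogg stream,
# decode the Opus audio and play it out the default speakers
recv = gst.parse_launch(
    "tcpserversrc host=127.0.0.1 port=5000 ! "
    "oggdemux ! opusdec ! audioconvert ! audioresample ! autoaudiosink")
recv.set_state(gst.STATE_PLAYING)
time.sleep(1)  # give the listener a moment to bind

# "sending" half: capture the default mic, encode to Opus, mux into Ogg
# and push the bytes at that same local port. carml splices the two TCP
# ends over the single Tor connection instead of localhost.
send = gst.parse_launch(
    "autoaudiosrc ! audioconvert ! audioresample ! "
    "opusenc ! oggmux ! tcpclientsink host=127.0.0.1 port=5000")
send.set_state(gst.STATE_PLAYING)

gobject.MainLoop().run()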
It's NOT FOR REAL USE at all yet. In fact, you'll need to know a bit about Python and Git just to get it to "go" right now, but I'd love for someone with some gstreamer and/or OGG experience to help me with a few questions:
1. What do I tell people to do so that "autoaudiosrc" and "autoaudiosink" will work? (e.g. "use my *other* mic, please")
2. Is ^^^ the right way to go, or do I need to accept some gstreamer string representing their device?
3. What's the best way to use gstreamer from Python on Debian/Ubuntu?
4. Does this have any hope of working on Mac? (Windows I'm presuming is "no way")
5. Are the Opus options I'm using really resulting in a fixed-rate codec?
6. Is this just a completely terrible idea and I should stop?
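On question 2, accepting a gstreamer description-string from the user might look roughly like this (a sketch only, nothing like it is in the branch yet, and the function name is made up):

import pygst
pygst.require("0.10")
import gst

def make_audio_source(description="autoaudiosrc"):
    # turn a user-supplied description like "autoaudiosrc",
    # "pulsesrc device=<name>" or "audiotestsrc" into a single bin
    # that the rest of the sending pipeline can link against
    return gst.parse_bin_from_description(description, True)

A hypothetical --audio-source option could then pass whatever string the user supplies straight through to that.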
Also, I've only tried this on Debian so far. You need to "apt-get install python-gst0.10" and use a virtualenv with --system-site-packages, because the gstreamer Python bindings aren't pip-installable. (In the end, I'll probably just spawn "gst-launch" processes instead and not depend on the Python bindings at all.)
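(The "spawn gst-launch" version would mostly mean managing a child process, something like this sketch, re-using the same made-up pipeline and port as above:)

import subprocess

# run the sending pipeline as an external gst-launch-0.10 process; the
# Python side only has to start and stop the child, and never imports
# the gstreamer bindings at all
send = subprocess.Popen([
    "gst-launch-0.10",
    "autoaudiosrc", "!", "audioconvert", "!", "audioresample", "!",
    "opusenc", "!", "oggmux", "!",
    "tcpclientsink", "host=127.0.0.1", "port=5000",
])
# ...and later, when the call ends:
# send.terminate()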
To try it out, clone the carml repository (git clone https://github.com/meejah/carml.git) and check out the "voicechat" branch. Then:
sudo apt-get install python-gst0.10
virtualenv --system-site-packages venv
./venv/bin/pip install --editable .
./venv/bin/carml voicechat
"Use the source": search for "autoaudiosrc" or "autoaudiosink" and change as required for your setup. You can also use "audiotestsrc" to get an amazingly annoying continuous tone. But at least you know it's working ;)
(Note that it's also better to use txtorcon's master, which now correctly waits until at least one descriptor has been uploaded when launching hidden services.)
For faster testing: near the bottom of voicechat.py you'll see some "if False:" statements you can change to "if True:" to get plain (PUBLIC!) TCP listeners instead of using Tor.
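(The difference behind that switch is basically which Twisted endpoint gets listened on; roughly the following, although the real code in voicechat.py isn't arranged quite like this, and the port number is made up:)

from twisted.internet import reactor
from twisted.internet.endpoints import TCP4ServerEndpoint
import txtorcon

USE_PLAIN_TCP = False   # the "if False:" knob, more or less

if USE_PLAIN_TCP:
    # a PUBLIC listener on all interfaces (local testing only!)
    endpoint = TCP4ServerEndpoint(reactor, 5000)
else:
    # a hidden service, using (or launching) a Tor instance for us
    endpoint = txtorcon.TCPHiddenServiceEndpoint.global_tor(reactor, 5000)

# either way, the rest of the code just calls endpoint.listen(factory)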
I'm not really sure whether this is generally useful, but at least there's an entertainingly high number of local TCP streams involved :/
Thanks,
I was sent some suggestions for this off-list, and Vincent said I could post it here.