Here's the summary of meek's CDN fees for December 2014. Earlier reports:
https://lists.torproject.org/pipermail/tor-dev/2014-August/007429.html
https://lists.torproject.org/pipermail/tor-dev/2014-October/007576.html
https://lists.torproject.org/pipermail/tor-dev/2014-November/007716.html
https://lists.torproject.org/pipermail/tor-dev/2014-December/007916.html
If you're just tuning in, meek is a pluggable transport introduced a few
months ago. https://trac.torproject.org/projects/tor/wiki/doc/meek.
                App Engine + Amazon  + Azure = total by month
February 2014       $0.09 +      -- +    -- =   $0.09
March 2014          $0.00 +      -- +    -- =   $0.00
April 2014          $0.73 +      -- +    -- =   $0.73
May 2014            $0.69 +      -- +    -- =   $0.69
June 2014           $0.65 +      -- +    -- =   $0.65
July 2014           $0.56 +   $0.00 +    -- =   $0.56
August 2014         $1.56 +   $3.10 +    -- =   $4.66
September 2014      $4.02 +   $4.59 + $0.00 =   $8.61
October 2014       $40.85 + $130.29 + $0.00 = $171.14
November 2014     $224.67 + $362.60 + $0.00 = $587.27
December 2014     $326.81 + $417.31 + $0.00 = $744.12
--
total by CDN      $600.63 + $917.89 + $0.00 = $1518.52 grand total
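As a cross-check, the table's figures can be summed programmatically (a quick sanity script; all numbers are copied from the table above):

```python
# Monthly charges per CDN, copied from the fee table in this report.
appengine = [0.09, 0.00, 0.73, 0.69, 0.65, 0.56, 1.56, 4.02, 40.85, 224.67, 326.81]
amazon = [0.00, 3.10, 4.59, 130.29, 362.60, 417.31]
azure = [0.00, 0.00, 0.00, 0.00]

# Per-CDN totals and the grand total, rounded to cents.
per_cdn = [round(sum(c), 2) for c in (appengine, amazon, azure)]
grand = round(sum(sum(c) for c in (appengine, amazon, azure)), 2)
print(per_cdn, grand)  # [600.63, 917.89, 0.0] 1518.52
```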
Usage was up in December compared to November, but the relative increase
wasn't as great as between November and October.
https://metrics.torproject.org/userstats-bridge-transport.html?graph=userst…
(The last few days are missing; see
https://lists.torproject.org/pipermail/tor-dev/2014-December/008021.html.)
Rather than attach bandwidth screenshots from the CDN control panels,
this month I'll just link to the Globe pages for the different backends.
A different relay is used depending on what backend you choose:
meek-google: https://globe.torproject.org/#/bridge/88F745840F47CE0C6A4FE61D827950B06F9E4…
meek-amazon: https://globe.torproject.org/#/bridge/3FD131B74D9A96190B1EE5D31E91757FADA1A…
meek-azure: https://globe.torproject.org/#/bridge/AA033EEB61601B2B7312D89B62AAA23DC3ED8…
It's interesting that meek-google has more bandwidth (800 KB/s) but
fewer clients (200), while meek-amazon has less bandwidth (550 KB/s) but
more clients (340). meek-azure is behind both with 200 KB/s and 150
clients. I also attached screenshots of the bandwidth graphs from those
pages.
As a step toward reducing the bandwidth bills, this month I closed
#12778, which reduces overhead by ensmallening HTTP headers.
https://trac.torproject.org/projects/tor/ticket/12778
Headers are now less than half their previous size, which I estimate
will save about 3% on bandwidth. We won't observe any effect, however,
until there's a new release of Tor Browser.
We're starting to move quite a bit of traffic. Not counting meek-azure,
we moved over 3.6 TB last month. (That figure includes an estimated 6–7%
of meek-induced overhead.)
== App Engine a.k.a. meek-google ==
Most of the App Engine bill is still using an existing credit. Of the
total of $326.81, $230.00 was covered by credits.
Here is how the costs broke down:
2132 GB               $255.86
1419 instance hours    $70.95
The attached meek-google-costs-2015-01-01.png shows the breakdown of
costs between bandwidth and instance hours for the last few months.
== Amazon a.k.a. meek-amazon ==
Asia Pacific (Singapore)      105M requests $126.52    783 GB $101.09
Asia Pacific (Sydney)         183K requests   $0.23      1 GB   $0.18
Asia Pacific (Tokyo)           25M requests  $30.28    146 GB  $17.83
EU (Ireland)                   57M requests  $68.81    481 GB  $37.23
South America (Sao Paulo)       1M requests   $2.49      7 GB   $1.58
US East (Northern Virginia)    19M requests  $18.69    161 GB  $12.39
--
total                         207M requests $247.02   1579 GB $170.30*
* The total from adding up subtotals is $0.01 higher than the actual
bill, I think because of some rounding.
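The region subtotals do reproduce the printed column totals; summing them shows the one-cent discrepancy is against the actual bill, not within the table (a quick check, figures copied from the table above):

```python
# Per-region request and bandwidth charges from the Amazon table above.
requests_usd = [126.52, 0.23, 30.28, 68.81, 2.49, 18.69]
bandwidth_usd = [101.09, 0.18, 17.83, 37.23, 1.58, 12.39]

print(round(sum(requests_usd), 2))   # 247.02
print(round(sum(bandwidth_usd), 2))  # 170.3
# Grand total from subtotals: 417.32, one cent above the $417.31 billed.
print(round(sum(requests_usd) + sum(bandwidth_usd), 2))
```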
== Azure a.k.a. meek-azure ==
I still can't figure out how to get total monthly bandwidth out of the
Azure console.
David Fifield
Hello Oonitarians,
This is a reminder that today there will be the weekly OONI meeting.
It will happen as usual on the #ooni channel on irc.oftc.net at 18:00
UTC (19:00 CET, 13:00 EST, 10:00 PST).
Everybody is welcome to join us and bring their questions and feedback.
See you later,
~ Arturo
On Fri, Jan 2, 2015 at 12:24 PM, Konstantin Belousov
<kostikbel(a)gmail.com> wrote:
> On Fri, Jan 02, 2015 at 09:09:34AM -0500, grarpamp wrote:
>> Some recent FreeBSD related questions in this app area.
>>
> What is the question ?
>
> As a background, I can repeat that FreeBSD implements syscall-less
> gettimeofday() and clock_gettime() for x86 machines which have
> usable RDTSC. The selection of the timecounter can be verified
> by sysctl kern.timecounter.hardware, and enabled by default fast
> gettimeofday(2) can be checked by sysctl kern.timecounter.fast_gettime.
>
> On some Nehalem machine, I see it doing ~30M calls/sec with enabled
> fast_gettime, and ~6.25M calls/sec with disabled fast_gettime. This is
> measured on 2.8GHz Core i7 930 with src/tools/tools/syscall_timing.
>
> Check your timecounter hardware. Since it was noted that the tests
> were done in VM, check the quality of RDTSC emulation in your hypervisor.
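As a rough illustration of the kind of measurement src/tools/tools/syscall_timing makes, here is a Python sketch that counts clock reads per second. Interpreter overhead dominates, so treat it only as a relative comparison between time sources, not a syscall benchmark:

```python
import time

def calls_per_second(clock, duration=0.25):
    """Count how many times `clock` can be called in `duration` seconds."""
    deadline = time.perf_counter() + duration
    n = 0
    while time.perf_counter() < deadline:
        clock()
        n += 1
    return int(n / duration)

# Compare a wall-clock read with a monotonic counter read.
print("time.time    :", calls_per_second(time.time))
print("perf_counter :", calls_per_second(time.perf_counter))
```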
https://lists.torproject.org/pipermail/tor-dev/2015-January/thread.html
http://docs.freebsd.org/mail/current/freebsd-performance.html
Maybe I can just refer non-subscribers to the two lists above; that way,
if anyone sees anything interesting, they can join and comment as desired.
Background might be that Tor operators have some large relays on *BSD
and were looking to validate, and find ways to improve, performance there.
Cheers.
https://lists.torproject.org/pipermail/tor-relays/
https://lists.torproject.org/pipermail/tor-talk/
Hi Vlad,
I agree. AWS pricing is not friendly for high-bandwidth users. However,
it is free for the first year of use, and the general population is not
taking advantage of this, which, AFAICT, is one of the points of the
Tor Cloud project. Once you are no longer free-tier eligible, I highly
recommend using a cheaper service like DO, Linode, Vultr, etc. At
least DO and Linode would be trivial to set up.
In summary, there is no need to replace AWS, but instead to supplement
it with other providers and make the Tor Cloud project more robust.
-Jeremy
On Fri, Jan 2, 2015 at 6:10 PM, Vlad Tsyrklevich <vlad(a)tsyrklevich.net> wrote:
> Hi Jeremy, I've been working on something related, so I figured I'd comment. I've
> been working on a TorCloud replacement using DigitalOcean's API, this has
> the benefit of a simplified set-up process (one-click set-up) and the
> pricing on bandwidth is a major win over AWS ($5 for 1TB U/L instead of $20
> for 40GB, though your move to t2.micro would move it down to $10.) The
> back-end is there and pretty much ready to go, I've primarily been waiting
> for my co-author to come back from vacation to finish the front-end.
>
> I haven't actually discussed sunsetting/replacing the current TorCloud with
> the Tor Project so this project might as well be vaporware; however, I think
> continuing to use AWS doesn't make sense for pricing and ease-of-use. All
> that said, if you're taking this on, it'd be great to try to also address
> TorCloud's current susceptibility to discovery by internet-wide port
> scanning as I mentioned in
> https://lists.torproject.org/pipermail/tor-dev/2014-December/007957.html
>
> On Thu Jan 01 2015 at 1:23:49 PM Jeremy Olexa <jolexa(a)jolexa.net> wrote:
>>
>> Hi Everyone, Happy New Year, first post to the -dev list but I've been
>> running some relays for months[1]. Overall a new user to Tor so feel
>> free to point me elsewhere if I'm asking poor questions.
>>
>> I've noticed that the Tor Cloud project is dead in the water right
>> now. The last post on this list is in June 2014[2] and the bugs have
>> been neglected, especially the one I opened[3] which states that tor
>> does not even start! I've seen that there is a new maintainer, but
>> still don't see any [public] activity.
>>
>> There are a couple of issues that have opportunity to be handled
>> better. A large roadblock seems to be building/publishing a new AMI so
>> I've addressed that in a keeping it simple theme:
>> - Use Amazon AMI instead of Ubuntu. Justification: more aligned to AWS
>> and is a first rate citizen in the ecosystem, yum repos, security
>> updates, etc
>> - Use t2.micro instances. Justification: t1.micro (currently in use)
>> are more expensive and less performant than t2.micro
>> - Use cloud-init to fetch ec2-prep.sh and run the script to configure
>> the instance[4]. Justification: Less likely to roll a new AMI just for
>> ec2-prep.sh updates therefore more of a rolling update infrastructure.
>> One example is adding a new bridge protocol, all newly launched
>> instances would get the ec2-prep updates without rolling a new AMI.
>> Justification 2: Less Tor Cloud maintainer work. Downside: AMI has
>> dependency on gitweb.torproject.org's uptime - we could use S3 or some
>> other CDN but this is starting to go against the simple theme.
>> - Publishing AMI to us-east (N Virginia), us-west2 (Oregon), us-europe
>> (Ireland). Justification: These are the lowest cost regions.
>> - Not publishing a separate AMI for private bridges. Justification: IF
>> the Tor Cloud project wants to provide configuration for private
>> bridges, reusing the shared AMI makes more sense. I have an idea of
>> using cloud-init's user-data field but need to test it. Justification
>> 2: Simpler, less Tor Cloud maintainer work.
>> - Not addressed (yet): Ubuntu's unattended upgrades concept. I have
>> one idea (yum security plugin) but I haven't thought about all the
>> implications.
>>
>> My AMI testing has proved that the above concepts work and I have ec2
>> bridges running in east and west[5]. I propose that the TOR project
>> uses the above model and I'm willing to help facilitate that. I am
>> willing to share the AMI with some select beta testers but I'm not
>> ready to make it public yet (some changes are expected). It is my idea
>> that once the final AMI is produced, the Tor Cloud project will not
>> have to publish another AMI unless wanting to change the instance
>> type, or other "infrastructure" changes.
>>
>> I look forward to hearing some feedback, thanks,
>> Jeremy
>>
>> [1]: https://atlas.thecthulhu.com/#search/jolexa
>> [2]: https://lists.torproject.org/pipermail/tor-dev/2014-June/007001.html
>> [3]: https://trac.torproject.org/projects/tor/ticket/13391
>> [4]: https://github.com/jolexa/tor-cloud/blob/master/run-once.sh
>> [5]: https://globe.thecthulhu.com/#/search/query=ec2bridgei
>> _______________________________________________
>> tor-dev mailing list
>> tor-dev(a)lists.torproject.org
>> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
>
> _______________________________________________
> tor-dev mailing list
> tor-dev(a)lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev
>
> From: Yawning Angel <yawning(a)schwanenlied.me>
> Subject: Re: [tor-dev] gettimeofday() Syscall Issues
>
> On Thu, 01 Jan 2015 23:42:42 -0500
> Libertas <libertas(a)mykolab.com> wrote:
>
>> The first two account for the bulk of the calls, as they are in the
>> core data relaying logic.
>>
>> Ultimately, the problem seems to be that the caching is very weak. At
>> most, only half of the calls to tor_gettimeofday_cached_monotonic()
>> use the cache. It appears in the vomiting print statements that
>> loading a single simple HTML page
>> (http://www.openbsd.org/faq/ports/guide.html to be exact) will cause
>> >30 gettimeofday() syscalls. You can imagine how that would
>> accumulate for an exit carrying 800 KB/s if the caching
>> doesn't improve much with additional circuits.
>
> So while optimization is cool and all, I'm not seeing why this
> specifically is the underlying issue.
>
> Each cell can contain 498 bytes of user payload. Looking at things
> simplistically this is 800 KiB/s -> 1644 cells/sec, leaving you with
> approximately 608 microseconds of processing time per cell.
>
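The per-cell arithmetic above can be reproduced directly (payload size and the measured FreeBSD gettimeofday() cost are taken from this thread):

```python
CELL_PAYLOAD = 498        # bytes of user payload per cell
rate = 800 * 1024         # 800 KiB/s, in bytes per second
gettimeofday_ns = 2441    # measured cost on FreeBSD (in a VM), from above

cells_per_sec = rate / CELL_PAYLOAD
budget_us = 1_000_000 / cells_per_sec          # processing time per cell
share = gettimeofday_ns / (budget_us * 1000)   # fraction of budget per call

print(round(cells_per_sec))  # ~1645 cells/sec (the message rounds to 1644)
print(round(budget_us))      # ~608 microseconds per cell
print(f"{share:.2%}")        # ~0.40% of the per-cell budget
```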
> On my i5-4250U box, gettimeofday() takes 22 ns on Linux, and 2441 ns on
> FreeBSD. I'm not sure how accurate the FreeBSD results are as it was
> in a VirtualBox VM (getpid() on the same VM takes 124 ns). If someone
> has a OpenBSD box they should benchmark gettimeofday() and see how long
> the call takes.
>
> Taking the FreeBSD case (since we know that tor works fine on Linux), a
> single gettimeofday() call takes approximately 0.39% of the per-cell
> processing budget.

IPredator has complained that tor on Linux spends too much time calling
time() when pushing 500 Mbit/s, which is an issue for them under 3.x
series kernels, but not under kernel 2.6:
https://ipredator.se/guide/torserver#performance
>
> For reference (assuming gettimeofday() in *BSD really is this shit
> performance wise), 7000 calls to gettimeofday() is 17.09 ms worth of
> calls.
>
> The clock code in tor does need love, so I wouldn't object to cleanup,
> but I'm not sure it's in the state where it's causing the massive
> performance degradation that you are seeing.
>
Yawning/Libertas,
I just reviewed my profiling of an exit relay running chutney verify with 200MB of random data.
This is on OS X 10.9.5 with tor 0.2.6.2-alpha-dev running the chutney basic-min network.
The three leaf functions that take the most time in the call graph are:
* channel_timestamp_recv
* channel_timestamp_active
* time
Each of these functions takes around 16% of the execution time; the next nearest function is sha1_block_data_order_avx, at 4%.
While I understand that OS X, BSD, and Linux syscalls aren't necessarily identical, we now have results for the following platforms suggesting that calling time() too often has a performance impact:
* Linux kernel 3.x
* OpenBSD
* OS X 10.9
My results suggest a maximum performance improvement of 15% on OS X if we reduced the calls to time() to a reasonable number per second.
teor
pgp 0xABFED1AC
hkp://pgp.mit.edu/
https://gist.github.com/teor2345/d033b8ce0a99adbc89c5
http://0bin.net/paste/Mu92kPyphK0bqmbA#Zvt3gzMrSCAwDN6GKsUk7Q8G-eG+Y+BLpe7w…
The Second International Conference on Electrical, Electronics, Computer
Engineering and their Applications (EECEA2015)
World Trade Center, Manila Philippines
February 11, 2015
University of Perpetual Help System Dalta, Las Piñas - Manila,
Philippines
February 12-14, 2015
http://sdiwc.net/conferences/eecea2015/
All registered papers will be included in SDIWC Digital Library.
===========================================================
The conference aims to enable researchers build connections between
different digital applications. The event will be held over four days,
with presentations delivered by researchers from the international
community, including presentations from keynote speakers and
state-of-the-art lectures.
Training sessions will be conducted by companies, namely Microchip,
Dassault Systèmes, and TechSource, in parallel with the paper
presentations; delegates are encouraged to participate.
RESEARCH TOPICS ARE NOT LIMITED TO:
* Electronics Engineering
* Electrical Engineering
* Computer Engineering
SUBMISSION GUIDELINES:
- Researchers are encouraged to submit their work electronically.
- Full paper must be submitted (Abstracts are not acceptable).
- Submitted paper should not exceed 15 pages, including illustrations.
All papers must be without page numbers.
- Papers should be submitted electronically in PDF format, without the
author(s)' names.
- Paper submission link:
http://sdiwc.net/conferences/eecea2015/openconf/openconf.php
IMPORTANT DATES:
Submission Deadline: January 12, 2015
Notification of Acceptance: 2-3 weeks from the date of submission
Camera Ready Submission: January 30, 2015
Registration Deadline: January 30, 2015
Seminar Date: February 11, 2015
Conference Dates: February 12-14, 2015
Libertas,
As I wrote to you earlier today, IPredator on Linux complained of similar issues with excessive calls to time() under the Linux 3.x kernel series, but not 2.6.
Let's track these issues in https://trac.torproject.org/projects/tor/ticket/14056
as they appear to be quite similar.
I'd like to get a sense of how many calls per second this represents.
(400,000 would seem to be 100 to 1000 per second, unless you were debugging for a long while.)
I'd also like to know which function(s)/call stack(s) these calls are being made from.
I wonder if TIME_IS_FAST is being defined as 1 in any of these Linux or BSD builds.
If TIME_IS_FAST is not defined (the default), we could change these calls to approx_time(), which is much faster.
And in that case there should only be around 1 call to time() each second, which is clearly not the behaviour that either Libertas or IPredator is seeing.
Libertas, can you search your tor binary or tor debug symbols for the function "approx_time"?
grep will do this for you, or strings, or perhaps gdb if you feel like doing it the hard way.
I'll add these questions to trac ticket #14056 as well.
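For illustration, the approx_time() caching pattern looks roughly like this (a sketch of the idea only, not tor's actual implementation):

```python
import time

_cached_time = int(time.time())

def update_approx_time():
    """Refresh the cache. In the real design this runs from a
    once-per-second event-loop callback, so it is the only periodic
    syscall-backed time read."""
    global _cached_time
    _cached_time = int(time.time())

def approx_time():
    """Cheap read for hot paths (e.g. per-cell code): no syscall,
    accurate to about one second."""
    return _cached_time
```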
Libertas wrote:
> I just ran ktrace/kdump (used for observing system calls) on the Tor
> process of my exit node, which relays about 800 KB/s. It listed >400,000
> calls to gettimeofday(). The list was swamped with them.
>
> I think I remember reading somewhere that that sort of system call is
> way slower in OpenBSD than Linux. Could this be related to the issue?
> I've found a lot of similarly mysterious slowdowns related to *BSD
> gettimeofday() on other projects' bug trackers, but nothing definitive.
>
> As you've likely noticed, although I started this discussion I'm very
> new to system-level performance tuning. Let me know if I'm not making
> sense, or if there's something else I should be focusing on.
>
>> <snip>
Is there a trick to writing to log files in Windows 7?
The sample torrc file has this:
Log debug file @LOCALSTATEDIR@/log/tor/debug.log
I changed mine to this with no success:
Log debug file C:\tor\tor_debug.log