Hello Philipp and iwakeh, hello list,
Damian and I sat down yesterday at the dev meeting to talk about doing a comparison of the various descriptor-parsing libraries with respect to capabilities, run-time performance, memory usage, etc.
We put together a list of things we'd like to compare and tests we'd like to run, which we want to share with you. Damian and I will both be working on these for metrics-lib for a short while and then switch to Stem. Please feel free to join us in this effort. The result is supposed to live on Stem's home page unless somebody comes up with a better place.
Thanks!
All the best, Damian and Karsten
On 30/09/15 10:57, Karsten Loesing wrote:
- capabilities
  - supported descriptor types
    - all the ones on CollecTor's formats.html
    - hidden service descriptors (have an agreed @type for that)
  - getting/producing descriptors
    - reading from file/directory
    - reading from tarballs
    - reading from CollecTor's .xz-compressed tarballs
    - fetching from CollecTor
    - downloading from directories (authorities or mirrors)
    - generating (for unit tests)
  - recognizing the @type annotation
  - inferring the type from the file name
  - keeping a reading history
  - user documentation
  - validation (format, crypto, successful sanitization)
  - packages available
  - how much usage by (large) applications
- performance (CPU time, memory overhead)
  - compression: .xz-compressed tarballs / decompressed tarballs / plain text
  - descriptor type: consensus, server descriptor, extra-info descriptor, microdescriptors
  - validation: on or off (allows lazy loading)
- tests by descriptor type
  - @type server-descriptor 1.0
    - Stem's "List Outdated Relays"
    - average advertised bandwidth (sketched after this list)
    - fraction of relays that can exit to port 80
  - @type extra-info 1.0
    - sum of all written and read bytes from write-history/read-history
    - number of countries from which v3 requests were received
  - @type network-status-consensus-3
    - average number of relays with the Exit flag
  - @type network-status-vote-3
    - Stem's "Votes by Bandwidth Authorities"
  - @type dir-key-certificate-3
  - @type network-status-microdesc-consensus-3 1.0
  - @type microdescriptor 1.0
    - look at a single microdesc consensus and its microdescriptors, compile a list of extended families
    - fraction of relays that can exit to port 80
  - @type network-status-2 1.0
  - @type directory 1.0
  - @type bridge-network-status
  - @type bridge-server-descriptor 1.0
  - @type bridge-extra-info 1.3
  - @type bridge-pool-assignment
  - @type tordnsel 1.0
  - @type torperf 1.0
- action items
  - get in touch with Dererk for packaging metrics-lib for Debian
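To make one of the per-type tests above concrete, here is a minimal sketch of the "average advertised bandwidth" test written against Stem; the tarball file name is an assumption, and advertised bandwidth is taken here as the minimum of the rate and observed values:

    # Sketch: average advertised bandwidth over a month of server
    # descriptors; the local tarball path is assumed to exist.
    from stem.descriptor import parse_file

    total, count = 0, 0

    for desc in parse_file('server-descriptors-2015-11.tar',
                           descriptor_type = 'server-descriptor 1.0'):
        # advertised bandwidth is commonly min(rate, observed)
        total += min(desc.average_bandwidth, desc.observed_bandwidth)
        count += 1

    print('average advertised bandwidth: %i bytes/s' % (total / count))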
Hi Karsten, started moving this forward with the easy bit: a table comparing capabilities. Mind taking a peek? Philipp, is this accurate for Zoossh?
https://stem.torproject.org/tutorials/mirror_mirror_on_the_wall.html#are-the...
Cheers! -Damian
On Sun, Oct 18, 2015 at 02:50:47PM -0700, Damian Johnson wrote:
Hi Karsten, started moving this forward with the easy bit: a table comparing capabilities. Mind taking a peek? Philipp, is this accurate for Zoossh?
https://stem.torproject.org/tutorials/mirror_mirror_on_the_wall.html#are-the...
The only thing that's wrong is that zoossh can detect types by looking at @type.
Cheers, Philipp
On 18/10/15 23:50, Damian Johnson wrote:
Hi Karsten, started moving this forward with the easy bit: a table comparing capabilities. Mind taking a peek? Philipp, is this accurate for Zoossh?
https://stem.torproject.org/tutorials/mirror_mirror_on_the_wall.html#are-the...
Looks good for metrics-lib. Thanks for starting this!
All the best, Karsten
Hi Damian,
I'm digging out this old thread, because I think it's still relevant.
I started writing some performance evaluations for metrics-lib and have some early results. All examples read a monthly tarball from CollecTor and do something trivial with each contained descriptor that requires parsing it. Here are the average processing times per descriptor, by type:
server-descriptors-2015-11.tar.xz:   0.334261 ms
server-descriptors-2015-11.tar:      0.285430 ms
extra-infos-2015-11.tar.xz:          0.274610 ms
extra-infos-2015-11.tar:             0.215500 ms
consensuses-2015-11.tar.xz:        255.760446 ms
consensuses-2015-11.tar:           246.713092 ms
microdescs-2015-11.tar.xz[*]:        0.099397 ms
microdescs-2015-11.tar[*]:           0.066566 ms
[*] The microdescs* tarballs contain both microdesc consensuses and microdescriptors, but I only cared about the latter; I extracted the tarballs, deleted the microdesc consensuses, and re-created and re-compressed the tarballs.
These evaluations were all run on a 2 GHz Core i7 with an SSD as storage.
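As an illustration of the measurement loop described above, a Stem version might look like the following (a hedged sketch; the metrics-lib original is Java, and the file name here is an assumption):

    # Sketch of the benchmark loop: read a tarball, touch each descriptor
    # so that it actually gets parsed, and report the mean time.
    import time
    from stem.descriptor import parse_file

    start, count = time.time(), 0

    for desc in parse_file('extra-infos-2015-11.tar'):
        desc.nickname  # trivial attribute access that forces parsing
        count += 1

    elapsed_ms = (time.time() - start) * 1000.0
    print('%f ms per descriptor' % (elapsed_ms / count))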
Any surprises in these results so far?
Would you want to move forward with the comparison and also include Stem? (And, Philipp, would you want to include Zoossh?)
All the best, Karsten
Nice! Few questions...
* Where are the metrics-lib scripts you used for the benchmarks? Should be easy for me to write Stem counterparts once I know what we're running. I'll later be including our demo scripts with the benchmarks, so comments would be nice if possible, to make them good examples for newcomers to our libraries.
* Which exact tarballs are you parsing? It would be useful if we ran all our benchmarks on the same host with the same data.
* Please take note somewhere of the metrics-lib commit id used, since I'll want to include that later when we add the results to our site.
Sorry I didn't get to this for the task exchange. Been focusing on Nyx so quite a few things have fallen off my radar.
Cheers! -Damian
On 03/01/16 21:25, Damian Johnson wrote:
Nice! Few questions...
- Where are the metrics-lib scripts you used for the benchmarks? Should be easy for me to write Stem counterparts once I know what we're running. I'll later be including our demo scripts with the benchmarks, so comments would be nice if possible, to make them good examples for newcomers to our libraries.
I'm planning to clean up this code before committing it to a real repository, but here's the unclean version in a pastebin:
- Which exact tarballs are you parsing? It would be useful if we ran all our benchmarks on the same host with the same data.
I'm using tarballs from CollecTor, except for microdescriptors which I'm processing as described below.
Agreed about running this on the same host in the future.
- Please take note somewhere of the metrics-lib commit id used, since I'll want to include that later when we add the results to our site.
Good idea.
For now, I think I'll wait for you to write similar benchmarks for Stem to learn whether I need to write any more for metrics-lib. And then I'll clean up things more on my side and commit them somewhere more serious than pastebin.
Sorry I didn't get to this for the task exchange. Been focusing on Nyx so quite a few things have fallen off my radar.
Sure, no worries at all. Very much looking forward to your results!
All the best, Karsten
Hi Karsten, implemented Stem counterparts of these (see attached). On the one hand the code is delightfully simple, but on the other, the measurements I got were quite a bit slower. Curious to see what you get when running at the same place you took your measurements.
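A minimal sketch of what such a Stem counterpart for consensuses could look like (the attachment itself is not reproduced; the file name is assumed). Parsing whole documents also explains why the consensus times in this thread are so much larger than the per-descriptor times:

    # Sketch: time consensus parsing per document. DocumentHandler.DOCUMENT
    # makes parse_file yield whole consensus documents instead of
    # individual router status entries.
    import time
    from stem.descriptor import parse_file, DocumentHandler

    start, count = time.time(), 0

    for consensus in parse_file('consensuses-2015-11.tar',
                                document_handler = DocumentHandler.DOCUMENT):
        len(consensus.routers)  # trivial use that forces parsing
        count += 1

    print('%f ms per consensus' % ((time.time() - start) * 1000.0 / count))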
Cheers! -Damian
On 12/01/16 05:38, Damian Johnson wrote:
Hi Karsten, implemented Stem counterparts of these (see attached). On the one hand the code is delightfully simple, but on the other, the measurements I got were quite a bit slower. Curious to see what you get when running at the same place you took your measurements.
Cool! Here's a comparison between metrics-lib and Stem run on the same system:
server-descriptors-2015-11.tar:
- metrics-lib: 0.285430 ms
- Stem: 1.02 ms (357%)

extra-infos-2015-11.tar:
- metrics-lib: 0.215500 ms
- Stem: 0.68 ms (316%)

consensuses-2015-11.tar:
- metrics-lib: 246.713092 ms
- Stem: 1393.10 ms (565%)

microdescs-2015-11.tar:
- metrics-lib: 0.066566 ms
- Stem: 0.66 ms (991%)
Do these Stem results look plausible?
Philipp, would you be able to write the Zoossh counterpart for the descriptor types supported by it? I'm even more curious now how those numbers compare to metrics-lib and Stem.
All the best, Karsten
Thanks! Yup, those results look reasonable. I was expecting a smaller delta with server/extra-info descriptors and a larger one with microdescriptors due to the lazy loading, but oh well. Which Stem commit and Python version was this with?
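For context, the lazy loading mentioned here hinges on parse_file's validate flag; a small sketch, with the file name and explicit descriptor type assumed:

    # With validate = True, Stem parses and checks each descriptor up
    # front; with the default (False), parsing is deferred until an
    # attribute is first accessed.
    from stem.descriptor import parse_file

    for desc in parse_file('microdescs-2015-11.tar',
                           descriptor_type = 'microdescriptor 1.0',
                           validate = False):
        desc.onion_key  # first attribute access triggers the actual parse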
Any thoughts on when you'll have time to clean up the metrics-lib examples? Happy to add the results to our site when they're ready.
Cheers! -Damian
On 12/01/16 17:19, Damian Johnson wrote:
Thanks! Yup, those results look reasonable. I was expecting a smaller delta with server/extra-info descriptors and a larger one with microdescriptors due to the lazy loading, but oh well. Which Stem commit and Python version was this with?
This was Stem commit c01a9cda4e7699c7f4bd642c8e81ed45aab7a29b and Python version 2.7.10.
Any thoughts on when you'll have time to clean up the metrics-lib examples? Happy to add the results to our site when they're ready.
Or should we add these performance tests for metrics-lib, Stem, and Zoossh to their own repository that also comes with scripts to fetch data from CollecTor? (Not sure if this is a smart thing to do, but I figured I should ask before adding things to the metrics-lib repository.)
All the best, Karsten
This was Stem commit c01a9cda4e7699c7f4bd642c8e81ed45aab7a29b and Python version 2.7.10.
Great, thanks! Also, what were the metrics-lib and Zoossh commits?
Or should we add these performance tests for metrics-lib, Stem, and Zoossh to their own repository that also comes with scripts to fetch data from CollecTor? (Not sure if this is a smart thing to do, but I figured I should ask before adding things to the metrics-lib repository.)
Sinister plan #572 is that I'll add these results and scripts to the page we have with the library comparison. Probably tomorrow. If you'd care to clean up the metrics-lib examples that would be great. Otherwise I'll include what we have and you can send me patches later with what you'd like.
Cheers! -Damian
On 13/01/16 20:42, Damian Johnson wrote:
This was Stem commit c01a9cda4e7699c7f4bd642c8e81ed45aab7a29b and Python version 2.7.10.
Great, thanks! Also, what were the metrics-lib and Zoossh commits?
metrics-lib: 8767f3e3bb8f6c9aa8cdb4c9fb0e9f2b545a7501 (java version "1.7.0_51")

Zoossh: 2380e557e35532fd25870c8fc7a84a3fc951dbfc (go version go1.5.2 darwin/amd64)
Or should we add these performance tests for metrics-lib, Stem, and Zoossh to their own repository that also comes with scripts to fetch data from CollecTor? (Not sure if this is a smart thing to do, but I figured I should ask before adding things to the metrics-lib repository.)
Sinister plan #572 is that I'll add these results and scripts to the page we have with the library comparison. Probably tomorrow. If you'd care to clean up the metrics-lib examples that would be great. Otherwise I'll include what we have and you can send me patches later with what you'd like.
Sounds like a good plan. Thanks for doing this!
Here's one thing I realized when composing this message. When I pasted Zoossh results earlier, I compared them to the results for metrics-lib and Stem processing tarballs. But Zoossh can only process extracted tarballs. I just re-ran metrics-lib and Stem with extracted tarballs and included all results below:
server-descriptors-2015-11.tar.xz:
- metrics-lib: 0.334261 ms

server-descriptors-2015-11.tar:
- metrics-lib: 0.285430 ms
- Stem: 1.02 ms (357%)

server-descriptors-2015-11/:
- metrics-lib: 0.682293 ms
- Zoossh: 0.458566 ms (67%)
- Stem: 1.11 ms (163%)

extra-infos-2015-11.tar.xz:
- metrics-lib: 0.274610 ms

extra-infos-2015-11.tar:
- metrics-lib: 0.215500 ms
- Stem: 0.68 ms (316%)

consensuses-2015-11.tar.xz:
- metrics-lib: 255.760446 ms

consensuses-2015-11.tar:
- metrics-lib: 246.713092 ms
- Stem: 1393.10 ms (565%)

consensuses-2015-11/:
- metrics-lib: 283.910864 ms
- Stem: 1303.53 ms
- Zoossh: 83 ms

microdescs-2015-11.tar.xz[*]:
- metrics-lib: 0.099397 ms

microdescs-2015-11.tar[*]:
- metrics-lib: 0.066566 ms
- Stem: 0.66 ms (991%)
[*] The microdescs* tarballs contain both microdesc consensuses and microdescriptors, but I only cared about the latter; I extracted the tarballs, deleted the microdesc consensuses, and re-created and re-compressed the tarballs.
I'm attaching a slightly updated version of the metrics-lib code. It's not cleaned up, but it's what I used to perform the measurements above.
All the best, Karsten
Oh, forgot to talk about compression. You can run the Stem script against compressed tarballs, but Python didn't add lzma support until Python 3.3...
https://stem.torproject.org/faq.html#how-do-i-read-tar-xz-descriptor-archive...
I suppose we could run over bz2 or gz tarballs, or upgrade Python. But I can't say the compressed benchmark is overly important.
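For reference, a quick sketch of what the lzma dependency means in practice (file name assumed):

    # tarfile can open .tar.xz archives directly on Python 3.3+, where
    # the standard library ships the lzma module; on Python 2 the
    # 'r:xz' mode raises an error.
    import tarfile

    with tarfile.open('server-descriptors-2015-11.tar.xz', 'r:xz') as archive:
        print('%i members' % len(archive.getmembers()))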
Cheers! -Damian
On 14/01/16 17:22, Damian Johnson wrote:
Oh, forgot to talk about compression. You can run the Stem script against compressed tarballs, but Python didn't add lzma support until Python 3.3...
https://stem.torproject.org/faq.html#how-do-i-read-tar-xz-descriptor-archive...
I suppose we could run over bz2 or gz tarballs, or upgrade Python. But I can't say the compressed benchmark is overly important.
I just ran all the Stem measurements using Python 3, which lets the results include the xz tarballs as well. The table below contains all results:
server-descriptors-2015-11.tar.xz:
- metrics-lib: 0.334261 ms
- Stem[**]: 0.63 ms (188%)

server-descriptors-2015-11.tar:
- metrics-lib: 0.28543 ms
- Stem: 1.02 ms (357%)
- Stem[**]: 0.63 ms (221%)

server-descriptors-2015-11/:
- metrics-lib: 0.682293 ms
- Stem: 1.11 ms (163%)
- Stem[**]: 1.03 ms (151%)
- Zoossh: 0.458566 ms (67%)

extra-infos-2015-11.tar.xz:
- metrics-lib: 0.274610 ms
- Stem[**]: 0.46 ms (168%)

extra-infos-2015-11.tar:
- metrics-lib: 0.2155 ms
- Stem: 0.68 ms (316%)
- Stem[**]: 0.42 ms (195%)

consensuses-2015-11.tar.xz:
- metrics-lib: 255.760446 ms
- Stem[**]: 913.12 ms (357%)

consensuses-2015-11.tar:
- metrics-lib: 246.713092 ms
- Stem: 1393.10 ms (565%)
- Stem[**]: 876.09 ms (355%)

consensuses-2015-11/:
- metrics-lib: 283.910864 ms
- Stem: 1303.53 ms (459%)
- Stem[**]: 873.45 ms (308%)
- Zoossh: 83 ms (29%)

microdescs-2015-11.tar.xz[*]:
- metrics-lib: 0.099397 ms
- Stem[**]: 0.33 ms (332%)

microdescs-2015-11.tar[*]:
- metrics-lib: 0.066566 ms
- Stem: 0.66 ms (991%)
- Stem[**]: 0.34 ms (511%)
[*] The microdescs* tarballs contain both microdesc consensuses and microdescriptors, but I only cared about the latter; I extracted the tarballs, deleted the microdesc consensuses, and re-created and re-compressed the tarballs.
[**] Run with Python 3.5.1
Is Python 3 really that much faster than Python 2? Should we just omit Python 2 results from this comparison?
All the best, Karsten
Yikes, thanks for getting these, Karsten! I don't think we should omit the earlier results, since the Python community is still very much split between 2.7 and 3.x. I'll include both so users know they can upgrade their interpreter to get a nice little speed boost.
Thanks!
Hi Karsten, hi Philipp, added these benchmarks to our site...
https://stem.torproject.org/tutorials/mirror_mirror_on_the_wall.html#are-the...
Cheers! -Damian
On Tue, Jan 12, 2016 at 09:40:35AM +0100, Karsten Loesing wrote:
Philipp, would you be able to write the Zoossh counterpart for the descriptor types supported by it? I'm even more curious now how those numbers compare to metrics-lib and Stem.
I'd love to, but I cannot promise when I'll be done with it :(
Cheers, Philipp
On Tue, Jan 12, 2016 at 09:40:35AM +0100, Karsten Loesing wrote:
Philipp, would you be able to write the Zoossh counterpart for the descriptor types supported by it?
I attached a small tool that should do the same thing Damian's script does for consensuses and server descriptors. Note, however, that it cannot process tar archives. It expects as input directories that contain consensuses and server descriptors.
You can compile it with "go build benchmark.go".
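For comparison, the Stem side covers the same directory-based input through its DescriptorReader; a sketch, with the directory name assumed:

    # DescriptorReader accepts files, directories, and tarballs alike,
    # so the same benchmark can cover Zoossh's directory-based input.
    from stem.descriptor.reader import DescriptorReader

    count = 0

    with DescriptorReader(['consensuses-2015-11/']) as reader:
        for desc in reader:
            count += 1

    print('%i descriptors read' % count)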
Cheers, Philipp
On 13/01/16 16:28, Philipp Winter wrote:
On Tue, Jan 12, 2016 at 09:40:35AM +0100, Karsten Loesing wrote:
Philipp, would you be able to write the Zoossh counterpart for the descriptor types supported by it?
I attached a small tool that should do the same thing Damian's script does for consensuses and server descriptors. Note, however, that it cannot process tar archives. It expects as input directories that contain consensuses and server descriptors.
You can compile it with "go build benchmark.go".
Cool, thanks! I added the Zoossh results below.
server-descriptors-2015-11.tar:
- metrics-lib: 0.285430 ms (100%)
- Stem: 1.02 ms (357%)
- Zoossh: 0.458566 ms (161%)

extra-infos-2015-11.tar:
- metrics-lib: 0.215500 ms (100%)
- Stem: 0.68 ms (316%)

consensuses-2015-11.tar:
- metrics-lib: 246.713092 ms (100%)
- Stem: 1393.10 ms (565%)
- Zoossh: 83 ms (34%)

microdescs-2015-11.tar:
- metrics-lib: 0.066566 ms (100%)
- Stem: 0.66 ms (991%)
Do the Zoossh results there look plausible?
All the best, Karsten
On Wed, Jan 13, 2016 at 05:47:03PM +0100, Karsten Loesing wrote:
Do the Zoossh results there look plausible?
I'm surprised that descriptor parsing is so slow, but I think the results are plausible, yes. I should look into it.
Thanks, Philipp
On 13/01/16 21:01, Philipp Winter wrote:
On Wed, Jan 13, 2016 at 05:47:03PM +0100, Karsten Loesing wrote:
Do the Zoossh results there look plausible?
I'm surprised that descriptor parsing is so slow, but I think the results are plausible, yes. I should look into it.
Not to worry. Turns out I compared the wrong numbers, and Zoossh is still the fastest parser in town (see my earlier reply in this thread).
What you could try, though, is extending Zoossh to parse tarballs rather than directories. This is more than 2 times faster in metrics-lib, and it doesn't clutter your hard disk with thousands or millions of tiny files.
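The tarball approach, sketched in Python for illustration (Zoossh itself would use Go's archive/tar; the archive name and explicit descriptor type are assumptions):

    # Parsing descriptors straight out of a tarball, without extracting
    # member files to disk first.
    import io
    import tarfile
    from stem.descriptor import parse_file

    count = 0

    with tarfile.open('server-descriptors-2015-11.tar') as archive:
        for member in archive:
            if member.isfile():
                raw = archive.extractfile(member).read()
                # parse each member's contents in memory
                for desc in parse_file(io.BytesIO(raw),
                                       descriptor_type = 'server-descriptor 1.0'):
                    count += 1

    print('%i descriptors parsed' % count)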
All the best, Karsten
What you could try, though, is extending Zoossh to parse tarballs rather than directories. This is more than 2 times faster in metrics-lib, and it doesn't clutter your hard disk with thousands or millions of tiny files.
For what it's worth, processing tarballs rather than flat files made a huge difference for Stem as well (tempted to say it was a 5x improvement). Since you care so much about Zoossh's speed, this could be a great way to make it even faster. :)