On Sun, Aug 16, 2015 at 02:44:40PM -0700, Damian Johnson wrote:
Ideally, zoossh should do the heavy lifting as it's implemented in a compiled language.
This is assuming zoossh is dramatically faster than Stem by virtue of being compiled. I know we've discussed this before but I forget the results - with the latest tip of Stem (i.e., with lazy loading), how do they compare? I'd expect time to be mostly bound by disk IO, so little to no difference.
zoossh's test framework says that it takes 36,364,357 nanoseconds (~36 ms) to lazily parse a consensus that is cached in memory (to eliminate the I/O bottleneck). That amounts to approximately 27 consensuses a second.
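That figure is just the per-parse time inverted:

# 36,364,357 ns per consensus works out to roughly 27 parses per second.
ns_per_consensus = 36364357
print(1e9 / ns_per_consensus)  # ~27.5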
I used the following simple Python script to get a similar number for Stem:
import stem.descriptor

with open(file_name) as consensus_file:
    for router in stem.descriptor.parse_file(
            consensus_file,
            'network-status-consensus-3 1.0',
            document_handler = stem.descriptor.DocumentHandler.ENTRIES):
        pass
This script manages to parse 24 consensus files in ~13 seconds, which amounts to 1.8 consensuses a second. Let me know if there's a more efficient way to do this in Stem.
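For reference, a sketch of one way to get that number (consensus_files below is a placeholder list, not the actual 24 files I used):

import time
import stem.descriptor

# consensus_files stands in for the 24 cached consensus paths.
consensus_files = ['consensus-1', 'consensus-2']

start = time.time()
for file_name in consensus_files:
    with open(file_name) as consensus_file:
        for router in stem.descriptor.parse_file(
                consensus_file,
                'network-status-consensus-3 1.0',
                document_handler = stem.descriptor.DocumentHandler.ENTRIES):
            pass
elapsed = time.time() - start

print('%d consensuses in %.1f s (%.1f/s)' % (
    len(consensus_files), elapsed, len(consensus_files) / elapsed))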
Interesting! First thought is 'wonder if zoossh is even reading the file content'. Couple quick things to try are...
with open(file_name) as consensus_file:
    consensus_file.read()
Disk IO is negligible for both tests because the file content is cached in memory. As expected, consensus_file.read() terminates almost instantly.
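To take the file system out of the picture entirely on the Stem side as well, one variation (a sketch; I'm assuming parse_file accepts any file-like object, not just an open file) is to read the consensus once and parse it from memory:

import io
import stem.descriptor

# Read the consensus once, then parse from memory so that disk I/O
# cannot factor into the measurement at all.
with open(file_name, 'rb') as consensus_file:
    raw = consensus_file.read()

for router in stem.descriptor.parse_file(
        io.BytesIO(raw),
        'network-status-consensus-3 1.0',
        document_handler = stem.descriptor.DocumentHandler.ENTRIES):
    pass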
FWIW, zoossh doesn't parse as much as Stem does, so it's not quite an apples-to-apples comparison. For example, exit policies are not parsed and simply stored as strings for now.
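To give a concrete idea of that extra work: Stem turns policy lines into queryable objects rather than leaving them as raw strings. A hand-written illustration of its ExitPolicy class (the rules below are made up, not taken from a real descriptor):

from stem.exit_policy import ExitPolicy

# Stem parses the rules into an object we can query directly,
# whereas zoossh currently keeps the raw strings.
policy = ExitPolicy('accept *:80', 'accept *:443', 'reject *:*')

print(policy.can_exit_to('208.113.165.162', 443))  # True
print(policy.can_exit_to('208.113.165.162', 25))   # False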
Cheers,
Philipp