On Thu, Dec 20, 2012 at 2:43 PM, Damian Johnson atagar@torproject.org wrote:
You want to point at your tor binary, I think, not just the path (i.e. something like "--tor ../tor-2.3.*/src/or/tor")
That did the trick, thanks.
Oops, I'm sorry about not being clearer about that.
No problem.
Why do the tests take so long to run? I noticed that most of the time almost no CPU is used and hardly any network is used.
You consider 34 seconds a long time? Heh, the test suite we have at my work takes on the order of twenty minutes to run...
Yes :-) I've seen projects whose tests take nearly 10 hours to run. However, the longer the tests take, the less likely developers are to run them. IMO all tests should ideally take no more than one to two minutes to run. So 34 seconds is pretty good, except that Tor ideally needs roughly 100 times as many tests to get code coverage and quality (of Tor itself) up into the 90%-plus range. With this few tests already taking 34 seconds, 100 times more tests would land in the many-minutes-to-hours range (34 s x 100 is nearly an hour). I'm thinking that many thousands of tests should take no longer than one to two minutes to run.
You can see the individual test runtimes to get an idea of where the time's going. The longest tests are the ones that parse the entire consensus. The sleep() calls you mentioned account for precious little (a few seconds in total), and are mostly there to test things like "Tor emits a BW event every second". Patches welcome.
It would be great if the tests reported their own runtimes, and used an output format consistent with the standard Tor 'make test' results. When I run the tests it's easy to see which ones take longer, because there are long pauses as text scrolls up the screen. However, during those pauses I see almost no CPU, network, or disk activity, which leads me to believe that some tests are not written as well as they could be.
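Even something simple would help; as a rough sketch (plain Python with unittest, not Stem's actual runner, and the wrapper name is made up):

    import time
    import unittest

    def run_with_timing(test_case):
        # Hypothetical wrapper: run one TestCase and report how long it took,
        # so slow tests stand out without having to watch for scroll pauses.
        suite = unittest.TestLoader().loadTestsFromTestCase(test_case)
        start = time.time()
        result = unittest.TextTestRunner(verbosity=0).run(suite)
        print('%-40s %6.2f s' % (test_case.__name__, time.time() - start))
        return result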
Could the individual tests be somehow run in parallel to speed things up?
See "Run integration targets in parallel" on...
Thanks. So that's a feature on the todo list :-) It looks like the tests start daemons on fixed ports, which stops other tests from running in parallel.

In the past I have solved this by having common test code start the daemon listening on port zero, which makes the OS pick an unused port. The common test code then needs some way to discover which port the daemon actually ended up listening on; a common way to do this is to have the daemon write the port to its log file. That way the common test code not only learns the unique port the daemon is listening on, but, for daemons which take a little while to start up, the log line with the listening port can also signal that the daemon is ready for work. Many tests can then run in parallel without worrying about port collisions. However, the production code for the daemon being tested may have to be changed so that it can listen on port zero and/or report the port it actually ends up listening on.
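Roughly, in Python (this isn't Stem's or Tor's actual code; the log format and names are just for illustration):

    import re
    import socket

    # Daemon side (sketch): bind to port 0 so the OS picks an unused port,
    # then read the chosen port back with getsockname() and announce it.
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(('127.0.0.1', 0))
    listener.listen(5)
    chosen_port = listener.getsockname()[1]
    print('Listening on port %d' % chosen_port)  # in practice: write this to the log

    # Test side (sketch): scan the daemon's log for the announced port. Seeing
    # that line both tells us which port to use and that the daemon is ready.
    def port_from_log(log_path):
        with open(log_path) as log:
            for line in log:
                match = re.search(r'Listening on port (\d+)', line)
                if match:
                    return int(match.group(1))
        return None

The test harness would then feed port_from_log(...) into whatever config each test instance needs, so parallel tests never have to coordinate port numbers up front.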
So what's the difference between the Stem tests and 'Chutney'? AFAIK Chutney is a bunch of WIP Python scripts to set up and execute end-to-end Tor tests. Aren't the Stem tests doing something very similar? And why is neither set of tests included in the Tor repo so that they can be run using make test?
-- Simon
Cheers! -Damian