On Wed, Dec 19, 2012 at 5:45 PM, Simon simonhf@gmail.com wrote:
On Wed, Dec 19, 2012 at 1:49 PM, Nick Mathewson nickm@alum.mit.edu wrote:
On Wed, Dec 19, 2012 at 2:29 PM, Simon simonhf@gmail.com wrote:
[...]
- Large parts of the codebase have been written in a tightly coupled
style that needs refactoring before it can be tested without a live Tor network at hand.
Much automated (unit) testing is done by mocking the data structures a function uses and/or the functions it calls. This is possible even with tight coupling.
What's your favorite C mocking solution for integrating with existing codebases without much disruption?
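To make the question concrete, here is a minimal single-file sketch of the sort of low-disruption seam I have in mind: production code sends through a function pointer, and the test swaps in a fake. All of the names below are made up for illustration and this is not Tor code; link-time tricks such as GNU ld's --wrap, or per-object #define substitution, get you the same effect when adding a pointer is not an option.

    /* mock_sketch.c -- hypothetical single-file illustration; not Tor code.
     * Production code calls the network through a function pointer so a
     * unit test can swap in a fake without touching any call sites. */
    #include <assert.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* --- "production" side ---------------------------------------- */

    static ssize_t real_net_send(int fd, const void *buf, size_t len) {
      return send(fd, buf, len, 0);          /* the real syscall */
    }

    /* The seam: everything below sends through this pointer. */
    static ssize_t (*net_send_fn)(int, const void *, size_t) = real_net_send;

    static int flush_cell(int fd, const void *cell, size_t len) {
      return net_send_fn(fd, cell, len) == (ssize_t)len ? 0 : -1;
    }

    /* --- "test" side ----------------------------------------------- */

    static size_t bytes_seen = 0;

    static ssize_t fake_send(int fd, const void *buf, size_t len) {
      (void)fd; (void)buf;
      bytes_seen += len;                     /* record, pretend it all went out */
      return (ssize_t)len;
    }

    int main(void) {
      net_send_fn = fake_send;               /* install the mock */
      assert(flush_cell(42, "abcd", 4) == 0);/* no real socket needed */
      assert(bytes_seen == 4);
      net_send_fn = real_net_send;           /* restore for other tests */
      puts("flush_cell: OK");
      return 0;
    }

The attraction is that the existing call sites do not change at all, which matters in a tightly coupled codebase.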
FWIW, I'd be interested in starting to try some of what you're describing about mandatory coverage in the 0.2.5 release series, for which the merge window should open in Feb/March.
[...]
If you like and you have time, it would be cool to stop by the tickets on trac.torproject.org for milestone "Tor: 0.2.4.x-final" in state "needs_review" and see whether any of them have code that would be amenable to new tests, or to look through the currently untested functions and figure out how to make more of them tested and testable.
If I were you then I'd first try to create an end-to-end system/integration test, running over localhost, that works via make test. This might involve refactoring the production code or even rearranging the source tree. The test script would build and/or mock all the necessary parts, bring up the localhost Tor network, run a variety of end-to-end tests, and shut the localhost Tor network down again.
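To sketch what such a driver could look like under make test (this is not Tor code; the binary name, config path, port, and timeout are all placeholder assumptions, and a real harness would bring up a whole private network and push traffic end to end rather than just probe one listener):

    /* e2e_smoke.c -- hypothetical "make test" driver sketch; not part of Tor.
     * Launches one tor process with a test config, waits for its SOCKS
     * listener to accept connections on localhost, then tears it down. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define SOCKS_PORT 9050           /* assumed SOCKSPort in testnet/torrc */

    static int socks_port_is_up(void) {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in sin;
      int ok;
      if (fd < 0)
        return 0;
      memset(&sin, 0, sizeof(sin));
      sin.sin_family = AF_INET;
      sin.sin_port = htons(SOCKS_PORT);
      sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
      ok = connect(fd, (struct sockaddr *)&sin, sizeof(sin)) == 0;
      close(fd);
      return ok;
    }

    int main(void) {
      pid_t pid = fork();
      int tries;
      if (pid < 0) {
        perror("fork");
        return 2;
      }
      if (pid == 0) {
        /* Child: run an unmodified tor binary with a test configuration. */
        execlp("tor", "tor", "-f", "testnet/torrc", (char *)NULL);
        perror("execlp tor");
        _exit(127);
      }
      /* Parent: poll the SOCKS port for up to ~30 seconds. */
      for (tries = 0; tries < 30 && !socks_port_is_up(); ++tries)
        sleep(1);
      kill(pid, SIGTERM);               /* always tear the network down */
      waitpid(pid, NULL, 0);
      if (tries == 30) {
        fprintf(stderr, "FAIL: SOCKS port never came up\n");
        return 1;
      }
      printf("PASS: tor started and is listening on 127.0.0.1:%d\n", SOCKS_PORT);
      return 0;
    }

Everything above is throwaway scaffolding; the point is only that make test ends up with a single target that builds what it needs, starts the localhost network, runs the checks, and always shuts the network down again, even on failure.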
We're a part of the way there, then. Like I said, we've got multiple network mocking/simulation tools. With a simple Chutney network plus the unit tests, we're at ~ 53% coverage... and all Chutney is doing there is setting up a 10-node network and letting it all bootstrap, without actually doing any end-to-end tests.
(ExperimenTor and Shadow are both heavier-weight alternatives for running bigger networks, but I think that here they might not be needed, since their focus seems to be on performance measurement. Chutney is enough for basic integration testing, and has the advantage that it's running unmodified Tor binaries. Stem is interesting here too, since it exercises Tor's control port protocol pretty heavily.)
I've uploaded the gcov output for running the unit tests, then running chutney with the networks/basic configuration, at http://www.wangafu.net/~nickm/volatile/gcov-20121219.tar.xz . (Warning, evil archive file! It will dump all the gcov files in your cwd.)
The 5 most covered modules (by LOC exercised) are (columns: LOC not exercised, LOC exercised, % exercised):

  dirvote.c.gcov         553   1222   68.85
  config.c.gcov         1429   1229   46.24
  util.c.gcov            470   1352   74.20
  routerparse.c.gcov     932   1436   60.64
  routerlist.c.gcov      858   1509   63.75

The 5 most uncovered modules (by LOC not exercised) are:

  routerparse.c.gcov         932   1436   60.64
  connection_edge.c.gcov     972    384   28.32
  rendservice.c.gcov        1249    202   13.92
  config.c.gcov             1429   1229   46.24
  control.c.gcov            2076    201    8.83

The 5 most uncovered nontrivial modules (by % not exercised) are:

  dnsserv.c.gcov         148      0    0.00
  procmon.c.gcov          48      0    0.00
  rendmid.c.gcov         135      0    0.00
  status.c.gcov           50      0    0.00
  rendclient.c.gcov      506     26    4.89
Next, the makefiles should be adapted so that the coverage is easier to discover, e.g. via something like make test-coverage. At that point the happy-path coverage should be much larger than it is today, though still well short of the desirable 80% to 100% range, and one could introduce the discipline of covering all new lines. The patch author then has the personal choice of using unit and/or system/integration level testing to achieve that coverage, and there is also a chance that no extra tests are needed because the patch is already covered by the happy path.
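To illustrate the unit-level option with a deliberately trivial, made-up example (bare assert() here only to keep it self-contained; real tests would go through the existing unit-test harness): a patch that adds a small helper also adds a test that walks every new line, including the failure branches.

    /* cover_sketch.c -- illustration only; the function and its test are
     * invented for this example and are not Tor code. */
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical new code added by a patch. */
    static int parse_port(const char *s, int *port_out) {
      char *end = NULL;
      long v = strtol(s, &end, 10);
      if (end == s || *end != '\0' || v < 1 || v > 65535)
        return -1;                 /* reject garbage and out-of-range values */
      *port_out = (int)v;
      return 0;
    }

    int main(void) {
      int port = 0;
      assert(parse_port("9050", &port) == 0 && port == 9050);  /* happy path */
      assert(parse_port("0", &port) == -1);                    /* range check */
      assert(parse_port("fish", &port) == -1);                 /* error path */
      puts("parse_port: OK");
      return 0;
    }

Run under the coverage build, those three asserts are enough to mark every line of the new function as executed.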
If you like the end-to-end localhost Tor network idea then I would be happy to collaborate on creating such a mechanism as a first step.
Yes, I like this idea a lot, especially if you're able to help with it, especially if it's based on an already-existing launch-a-network-on-localhost tool. I'm going to be travelling a lot for the rest of December, but let's set up a time to chat in the new year about how to get started.
Preemptive Happy New Year, -- Nick