On Thu, Dec 20, 2012 at 11:25 AM, Nick Mathewson nickm@alum.mit.edu wrote:
On Wed, Dec 19, 2012 at 8:57 PM, Simon simonhf@gmail.com wrote:
On Wed, Dec 19, 2012 at 4:35 PM, Nick Mathewson nickm@alum.mit.edu wrote:
What's your favorite C mocking solution for integrating with existing codebases without much disruption?
This could be worth a separate thread. I'm not aware of really good solutions for C.
[....]
I had a look around and found a few more possibilities, including:
https://code.google.com/p/test-dept/
http://throwtheswitch.org/white-papers/cmock-intro.html
https://code.google.com/p/cmockery/
None of them looks compellingly great, TBH. The methods that people seem to be using involve code-rewriting tricks, mandatory macro tricks, LLVM tricks, x86 assembly tricks, and uglier stuff still.
Perhaps somebody else has a good recommendation? It would be sad if we went and built our own.
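(Just to make the comparison concrete: most of these approaches boil down to some kind of function seam, where production code calls through a pointer or macro that a test can swap out. A rough sketch of that pattern in plain C, with made-up names and no particular library's API, would be something like:

    /* Illustrative only: a minimal function-seam mock in plain C.
     * All names here are placeholders, not any framework's API. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Production code calls through this pointer instead of the libc
     * function directly, so a test can substitute its own version. */
    static int real_gethostname(char *buf, size_t len) {
      return gethostname(buf, len);
    }
    static int (*gethostname_impl)(char *, size_t) = real_gethostname;

    /* Code under test. */
    static int greet(char *out, size_t outlen) {
      char host[64];
      if (gethostname_impl(host, sizeof(host)) < 0)
        return -1;
      snprintf(out, outlen, "hello from %s", host);
      return 0;
    }

    /* Test double: deterministic replacement for the real call. */
    static int fake_gethostname(char *buf, size_t len) {
      strncpy(buf, "testhost", len);
      buf[len - 1] = '\0';
      return 0;
    }

    int main(void) {
      char msg[128];
      gethostname_impl = fake_gethostname;   /* "mock" the dependency */
      if (greet(msg, sizeof(msg)) == 0 &&
          strcmp(msg, "hello from testhost") == 0)
        puts("PASS");
      else
        puts("FAIL");
      gethostname_impl = real_gethostname;   /* restore the real one */
      return 0;
    }

The downside, of course, is that every function you want to mock has to be rewritten to call through the seam, which is exactly the "mandatory macro tricks" problem.)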
Yep, I've seen those and, like you, my socks have not been knocked off :-) It is sad, but the reality is that there are relatively few teams actively measuring and/or enforcing code coverage. And of those that do, most say it's too difficult to get coverage above the 70% to 90% range... probably because the tools for mocking etc. don't exist and/or are too limiting, e.g. compiler specific. And even fewer teams are doing high levels of cross-platform coverage. I've only ever heard of one open source project that has 100% code coverage with 1000s of tests that run in seconds, and it happens to be C and cross-platform too.
[...]
We're a part of the way there, then. Like I said, we've got multiple network mocking/simulation tools. With a simple Chutney network plus the unit tests, we're at ~ 53% coverage... and all Chutney is doing there is setting up a 10-node network and letting it all bootstrap, without actually doing any end-to-end tests.
Sounds good.
I guess Chutney must be a separate project, since I can't find it in the Tor source .tar.gz?
Yup. It's accessible from gitweb.torproject.org. I'd be surprised if more than 5 people have tried to run it, ever.
:-)
Why make it a separate project? Why not make it part of make test in the Tor project?
(More results: unit tests + chutney gives 52.60% coverage. Unit tests + stem gives 39.03% coverage. Unit tests + stem + chutney gives 54.49% coverage.)
(ExperimenTor and Shadow are both heavier-weight alternatives for running bigger networks, but I think that here they might not be needed, since their focus seems to be on performance measurement. Chutney is enough for basic integration testing, and has the advantage that it's running unmodified Tor binaries. Stem is interesting here too, since it exercises Tor's control port protocol pretty heavily.)
More links:
https://shadow.cs.umn.edu/
http://crysp.uwaterloo.ca/software/exptor/
I'm not sure anybody's ever tried to do coverage with them.
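(As an aside, "exercising the control port" is, at the lowest level, just a line-based TCP protocol, so an integration check doesn't have to go through stem at all. A throwaway probe, assuming a tor instance with ControlPort 9051 on localhost and no authentication configured so a bare AUTHENTICATE is accepted, could look roughly like this:

    /* Illustrative control-port probe; POSIX sockets, no cookie/password
     * handling.  Assumes "ControlPort 9051" and no auth configured. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
      int fd = socket(AF_INET, SOCK_STREAM, 0);
      struct sockaddr_in addr;
      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_port = htons(9051);
      inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
      if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
      }
      /* Log in (NULL auth) and ask for the version string. */
      const char *req = "AUTHENTICATE\r\nGETINFO version\r\nQUIT\r\n";
      write(fd, req, strlen(req));
      char buf[1024];
      ssize_t n;
      while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);  /* expect "250 OK" and "250-version=..." lines */
      }
      close(fd);
      return 0;
    }

Not that we'd want many tests written at that level; it's just to show that the protocol side is easy to poke at from a test harness.)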
[..]
Yes, I like this idea a lot, especially if you're able to help with it, especially if it's based on an already-existing launch-a-network-on-localhost tool.
I'm not aware of such a tool.
Chutney is such a tool; ExperimenTor can be made (I think) to act as such a tool; Shadow is a little more complicated.
The way I have done it in the past is to use Perl to launch and monitor the various processes. The good thing about Perl is that it can run unmodified on both *nix and Windows, plus you can do one-liners.
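(The same launch-and-wait skeleton could be done in plain C too, though POSIX-only; Perl mostly buys you the Windows side and the one-liners. A rough sketch, with a placeholder command line rather than a real tor invocation:

    /* Rough POSIX-only sketch of launching and monitoring test processes.
     * The "sleep 1" command is a stand-in for a real per-node tor run. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NUM_NODES 3

    int main(void) {
      pid_t pids[NUM_NODES];

      /* Launch one child process per test node. */
      for (int i = 0; i < NUM_NODES; i++) {
        pid_t pid = fork();
        if (pid < 0) {
          perror("fork");
          return 1;
        }
        if (pid == 0) {
          /* In a real harness this would exec the tor binary with a
           * per-node torrc and data directory. */
          execlp("sleep", "sleep", "1", (char *)NULL);
          perror("execlp");
          _exit(127);
        }
        pids[i] = pid;
        printf("launched node %d as pid %ld\n", i, (long)pids[i]);
      }

      /* Monitor: wait for each child and check its exit status. */
      int failures = 0;
      for (int i = 0; i < NUM_NODES; i++) {
        int status = 0;
        if (waitpid(pids[i], &status, 0) < 0 ||
            !WIFEXITED(status) || WEXITSTATUS(status) != 0)
          failures++;
      }
      printf("%d of %d nodes exited abnormally\n", failures, NUM_NODES);
      return failures ? 1 : 0;
    }

)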
[...]
Hm. I'm not going to say that I'd turn down work in Perl, but the rest of the Tor developers don't spend much time using Perl. I don't know that any of us has written a Perl program of over 100 lines in the last 5-6 years. I'm not saying "perl sucks" or "I refuse to use anything written in perl", but you should be aware that if you do write anything in Perl, there probably aren't a lot of other people involved with Tor right now with the know-how to collaborate effectively on the Perl parts or help maintain them.
We could just write everything in C?
-- Simon