On Sat, May 18, 2013 at 12:44 AM, not me <anemenja@gmail.com> wrote:
> IMNSHO, it's dense to even want to use pointers this way. Why the hell are you converting pointers like this in the first place? It's just asking for a horrible mess.
This is actually a normal and useful thing to do in C. (I think you're used to C++, where it is indeed much less useful due to the richer variety of abstractions.)
Without having looked into this in much detail, I suspect that most of the cases where this comes up in Tor are due to passing a numeric value as data to a callback API that's specified in terms of void*. Note that this is strictly speaking casting an _integer_ to a _pointer_ and back and expecting this not to lose information, but the compiler can't tell which way round it's being done.
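Concretely, the pattern looks something like this (a minimal sketch with made-up names, run_callback and on_timeout, not Tor's actual code):

    #include <stdio.h>

    /* Hypothetical event API: user data travels as an opaque void*. */
    typedef void (*callback_t)(void *arg);

    static void run_callback(callback_t cb, void *arg)
    {
        cb(arg);
    }

    static void on_timeout(void *arg)
    {
        /* Recover the integer that was smuggled in below. */
        int fd = (int)(long)arg;
        printf("timeout on fd %d\n", fd);
    }

    int main(void)
    {
        int fd = 42;
        /* The API wants a pointer, we have an integer: cast it in,
           cast it back out. No actual object is ever pointed to. */
        run_callback(on_timeout, (void *)(long)fd);
        return 0;
    }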
> A long is not guaranteed to be the same size as a pointer
A long _was_ guaranteed to be _at least as large as_ a pointer in C89, and Microsoft still claims conformance to C89, not to any newer version of the C standard, despite this willful violation. (It is not a C99 violation. See my other message.)
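To make that concrete: on most 64-bit Unixes (LP64), long and void* are both 8 bytes, so the cast is lossless; on Win64 (LLP64), long stays at 4 bytes while pointers grow to 8. A two-line probe shows which world you're in (a sketch, nothing Tor-specific):

    #include <stdio.h>

    int main(void)
    {
        /* LP64 (typical 64-bit Unix): prints 8 and 8, so (long)ptr is
           lossless.  LLP64 (Win64): prints 4 and 8, so it can truncate. */
        printf("sizeof(long)  = %lu\n", (unsigned long)sizeof(long));
        printf("sizeof(void*) = %lu\n", (unsigned long)sizeof(void *));
        return 0;
    }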
> if you're doing this, you're doing it wrong
I think if you read the code you will find that there is no better way to do what it is doing in C.
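For what it's worth, the closest thing to a standards-blessed spelling goes through C99's intptr_t from <stdint.h>, which is defined to round-trip through void* without loss. It's the same cast pattern, just with an integer type guaranteed to be wide enough (a sketch; note intptr_t is technically optional in C99, though every implementation I know of provides it):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        int fd = 42;
        /* intptr_t is wide enough to hold any void*, so packing an int
           through it survives even on LLP64, where plain long does not. */
        void *cookie = (void *)(intptr_t)fd;
        assert((int)(intptr_t)cookie == fd);
        return 0;
    }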
[re POSIX]
> Funny, I do it all the time without problems; of course, I generally avoid standards like POSIX, which despite claiming to be portable actually isn't. Although TBF, OS X is my big exception, as it feels more like writing C on a SunOS 4 box than anything modern. Then again, I also get an easier out by preferring C++ these days and making use of the STL, which lets me avoid a lot of related pitfalls (specifically thinking of file operations).
Your experience is precisely the opposite of mine, then. POSIX.1-2001 functionality, including most of its optional bits, is reliably available and correct everywhere except Windows, in my experience. OS X does give me grief sometimes, but generally only due to its insistence on doing shared libraries differently from everyone else. And the STL is nice and all, but I don't think the C++ standard library *has* any filesystem stuff other than iostreams, which are not particularly helpful. (Maybe they added something in C++11?)
...
> Either way, I wasn't referencing those so much as things like vastly superior heuristics for reordering variables, being especially careful with function pointers and putting them into registers, and so on; never mind things like exception handling that doesn't negate all of the stack/heap/etc. cookies. Put quite simply, if you're using MinGW to ship anything serious for a Windows platform, you're being irresponsible with your users' computers. MSVC Express is free as in go-download-it-now, so there's really no excuse anymore. This doesn't even touch the fact that it's actually a better compiler in terms of the performance of the generated code.
I can't really comment on this, not having looked at the codegen differences in detail, but I think the benefits of much of this hardening stuff are wildly overblown. Are you familiar with the current state of the art in bypassing them?
Anyway, like Nick I would be happy to see MSVC support patches. I suggest you look into the possibility of using the existing autotools-based build system to drive MSVC; I understand that this is supported in the latest automake, and it would mean that the build harness is much less likely to bitrot. You would still need the MSYS environment to run the build in, but you wouldn't be using their C compiler.
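If you go that route, I believe the recipe looks roughly like this, using the 'compile' and 'ar-lib' wrapper scripts that ship with recent automake (untested, and the flags are guesses, so treat it as a sketch):

    # Run from an MSYS shell with the Visual Studio tools on PATH.
    ./configure CC="./compile cl -nologo" \
                CFLAGS="-MD" \
                LD="link" \
                NM="dumpbin -symbols" \
                AR="./ar-lib lib" \
                RANLIB=":"
    make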
It is not clear to me why you need Tor to be 64-bit. It runs as a separate process and acts as a local network proxy. It should be able to do that just fine for 64-bit processes while continuing to be 32-bit itself. Please clarify.
> I really hate this line of logic. Look, I get that this code was obviously written sorta ad hoc with only 32-bit in mind, that extending it to 64-bit has been a bit of a process, and that the obvious advantages to most users are going to be mildly negligible, with whatever performance increases they gain getting lost in the network. But it drives me nuts that the answer is for everyone to just stop using the full potential of their computers.
I use "this line of logic" to try to decide on priorities. We obviously like the idea of that 15% performance gain due to the bigger register set, if it pans out, and there are other known concrete benefits to going 64-bit (better ASLR entropy, higher-performance cryptographic primitives in OpenSSL, that sort of thing) but we don't know if it's worth sinking a bunch of developer time into it compared to other things we could be doing. If there is a specific thing that you can't do right now because the program runs as 32-bit (and not just because that makes it slower) then suddenly 64-bit builds are more interesting.
> More so, I'm not the typical user: I'll be dealing with very, very large datasets and have requirements elsewhere (i.e. in the database) that just make it more of a project to invert simply for Tor than it's worth. Truth be told, I have very little interest in the overall package of Tor, utilizing the proxy, or one of the million and one front-ends that really serve no purpose; the interest in Tor itself is the existing infrastructure, the network that already utilizes it. I was investigating the libonionrouter package because when I saw it I thought "finally! someone did all the heavy lifting and it wasn't me", and then realized it was just a wrapper around the Tor code base, which is how I got here.
It would help us understand where you're coming from if you said a little about your larger goals. What are you trying to do for which the existing network of Tor nodes is useful, but the existing software is not fit for purpose?
zw