On Mon, May 18, 2020 at 1:01 PM yoehoduv@protonmail.com wrote:
When a cell with a small effort in the queue has any chance of getting selected, the optimal strategy for a legitimate client would be to keep computing nonces and send as many of them as possible, until doing so causes congestion on his network. When instead only the cell with the highest effort is processed, sending more than one nonce per connection does a client no good. We want each legitimate client to send only one nonce per connection.
Sending many requests is not the optimal strategy. One high-effort request would have exactly the same chance of being selected as many low-effort requests with the same total effort. The difference is that with many requests, the client wouldn't know which rendezvous would be selected, so he'd have to waste resources on opening many circuits.
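To put rough numbers on it (assuming, which is my reading of the scheme being discussed, that a request's chance of being picked is proportional to its effort relative to everything else in the queue): if the rest of the queue holds a total effort of 990, one request of effort 10 is picked with probability 10/1000 = 1%, while ten requests of effort 1 each are picked with a combined probability of 10 * 1/1000 = 1% as well. The work spent is identical; the only difference is that the second client has to keep ten rendezvous circuits open instead of one.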
Anyway, I suggest using a priority queue first and seeing how it works in practice. To allow efficient insertion and trimming, the queue can be implemented as a red-black tree.
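For concreteness, here is a minimal sketch of such a queue, assuming the BSD <sys/tree.h> red-black tree macros are available (libbsd provides them on Linux); the struct and function names are placeholders, not anything from the tor codebase:

#include <stdint.h>
#include <stdlib.h>
#include <sys/tree.h>   /* BSD red-black tree macros (libbsd on Linux) */

/* One queued rendezvous request, keyed by its claimed PoW effort. */
struct pow_request {
  RB_ENTRY(pow_request) node;
  uint32_t effort;
  /* ... rendezvous material would go here ... */
};

/* Order by effort; break ties by address so equal efforts can coexist. */
static int
effort_cmp(struct pow_request *a, struct pow_request *b)
{
  if (a->effort != b->effort)
    return a->effort < b->effort ? -1 : 1;
  return (uintptr_t)a < (uintptr_t)b ? -1 : ((uintptr_t)a > (uintptr_t)b);
}

RB_HEAD(pow_queue, pow_request);
RB_GENERATE_STATIC(pow_queue, pow_request, node, effort_cmp)

static struct pow_queue queue = RB_INITIALIZER(&queue);

/* Insert a verified request: O(log n). */
static void
queue_insert(struct pow_request *req)
{
  RB_INSERT(pow_queue, &queue, req);
}

/* Pop the highest-effort request for processing: O(log n). */
static struct pow_request *
queue_pop_best(void)
{
  struct pow_request *best = RB_MAX(pow_queue, &queue);
  if (best)
    RB_REMOVE(pow_queue, &queue, best);
  return best;
}

/* Drop the lowest-effort request when the queue grows too long: O(log n). */
static void
queue_trim_worst(void)
{
  struct pow_request *worst = RB_MIN(pow_queue, &queue);
  if (worst) {
    RB_REMOVE(pow_queue, &queue, worst);
    free(worst);
  }
}

The point of the tree over a plain heap is that RB_MIN and RB_MAX give both ends of the ordering, so trimming from the bottom is as cheap as popping from the top.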
As for trimming the priority queue, we don't have to use a heap. We can compress the effort into maybe 7 bits and then store the requests in 128 arrays, one per compressed value. Trimming then just means freeing the lowest-effort array. The compression can be something like a floating-point encoding:
((~clz(POW_NONCE) & 0x3f) << 1) | ((POW_NONCE >> (126 - clz(POW_NONCE))) & 1)
That is, take the number of leading zeros together with the single bit immediately to the right of the leftmost 1 bit, then complement the first part to preserve the ordering (a smaller value means more work). We can expect the number of leading zeros to be less than 64, so this fits in 7 bits. A decrement of this value means about 1.3-1.5 times more work, which should be fine-grained enough.
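A rough C version of that compression, assuming the value being bucketed (POW_NONCE above) is a 128-bit PoW value given as two 64-bit words and that GCC/Clang's __builtin_clzll is available; the shift uses 126 - clz so the extracted bit really is the one to the right of the leading 1, and the complement is confined to 6 bits:

#include <stdint.h>

/* Leading zeros of a 128-bit value given as two 64-bit halves
 * (returns 128 for an all-zero input). */
static unsigned
clz128(uint64_t hi, uint64_t lo)
{
  if (hi)
    return (unsigned)__builtin_clzll(hi);
  if (lo)
    return 64 + (unsigned)__builtin_clzll(lo);
  return 128;
}

/* Compress a 128-bit PoW value into a 7-bit bucket index:
 * 6 bits for the complemented leading-zero count (assumed < 64)
 * plus the bit immediately to the right of the leading 1.
 * A smaller index means more work, so bucket 0 holds the best
 * requests and trimming frees the highest-numbered non-empty bucket. */
static unsigned
effort_bucket(uint64_t hi, uint64_t lo)
{
  unsigned clz = clz128(hi, lo);
  if (clz > 63)
    clz = 63;  /* clamp: anything this rare goes into the best bucket */
  unsigned pos = 126 - clz;  /* bit right of the leading 1, LSB = 0 */
  unsigned next_bit = pos >= 64 ? (unsigned)((hi >> (pos - 64)) & 1)
                                : (unsigned)((lo >> pos) & 1);
  return ((63 - clz) << 1) | next_bit;  /* 0 .. 127 */
}

With 128 fixed buckets, inserting a request is appending to one array and trimming is freeing the highest-numbered non-empty one, both essentially O(1) apart from the memory management.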
What you are describing is called a hashtable. The question is: what happens when one of the arrays gets filled up? You would have to discard all additional requests coming into that bucket. With your construction, it's very likely that most requests would end up in just a few buckets and the rest would remain empty (e.g. all buckets for more than 40 leading zeroes).
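For scale: a hash with 40 leading zero bits takes on the order of 2^40 (roughly 10^12) attempts to find, so under any realistic client effort those upper buckets would never see a request, while almost everything would land in the handful of buckets matching the effort clients actually target.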
T.