Hi Arturo,
Thanks for posting this message and request. I've been meaning to respond on the OONI list thread as well (Phillip Winter asked me to check it out).
Together with the panel, I will take a closer look at the OONI warnings and user guidance.
In the interim I can share some information on how we are handling informed consent in the ICLAB project (project site: https://iclab.org/ github: https://github.com/iclab). As you know, this project is a collaboration between Citizen Lab, Stony Brook, and Princeton University. Because we are running it through academic institutions, we were required to obtain ethics board review and approval before we could run any measurements.
We have a model very similar to the one you are using in OONI. We provide a group of users with Raspberry Pi units and run pilot tests of our software and overall research protocol with them. Each user goes through an informed consent process in which we present the objectives of the project and the potential risks, and explain the user's rights as a research participant. You can find a copy of our informed consent document here: https://drive.google.com/file/d/0B_KamVIs1VmmV1dxZF9tRGRrTVU/view?usp=sharin... Only users who have agreed to this informed consent document are able to run tests in our system.
A challenge that you have also identified is that the risk of running measurements is highly contextual and depends on the specific location and situation. This is a hard problem for sure.
In our research protocol, the idea is to provide users with some baseline information about their vantage point that could help determine the relative research risk on a scale of Low / Medium / High. We put together indicators such as the Freedom House "Freedom on the Net" score and the Economist "Democracy Index" score to provide a baseline that can be combined with other contextual information. These are not perfect metrics and we are still developing the idea further, but through this combination of information (and other potential indicators) we are working towards an approximation of relative risk. Under our research protocol we would not allow measurements to be run in a "High" risk situation. Examples of high risk include areas with armed conflict or unrest (e.g., Syria) and countries that are otherwise clearly too risky or impractical for getting measurements (e.g., North Korea, Cuba). As part of the pilot phase of our research protocol we will be working with our pilot users to refine this idea further and see whether and how it can better capture risk levels and scale.
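To make the combination idea a bit more concrete, here is a rough sketch of how such a baseline could be computed. The function name, threshold values, score orientations, and aggregation rule below are illustrative assumptions for the sake of discussion, not the actual scoring in our protocol:

# Illustrative sketch only: combining two public indicators into a coarse
# Low / Medium / High baseline. All thresholds and the aggregation rule
# are hypothetical placeholders.

def baseline_risk(fotn_score, democracy_index):
    """Return 'Low', 'Medium', or 'High' for a vantage point.

    fotn_score: Freedom House "Freedom on the Net" score, assumed here
        to be on a 0-100 scale where higher means less internet freedom.
    democracy_index: Economist "Democracy Index", a 0-10 scale where
        higher means more democratic.
    """
    # Map each indicator to a bucket: 0 = low risk, 1 = medium, 2 = high.
    if fotn_score <= 30:
        fotn_bucket = 0
    elif fotn_score <= 60:
        fotn_bucket = 1
    else:
        fotn_bucket = 2

    if democracy_index >= 7.0:
        dem_bucket = 0
    elif democracy_index >= 4.0:
        dem_bucket = 1
    else:
        dem_bucket = 2

    # Conservative aggregation: take the worse of the two indicators,
    # so a single alarming signal is enough to raise the rating.
    worst = max(fotn_bucket, dem_bucket)
    return ["Low", "Medium", "High"][worst]

if __name__ == "__main__":
    # Hypothetical example values, not real country scores.
    print(baseline_risk(fotn_score=25, democracy_index=8.5))  # Low
    print(baseline_risk(fotn_score=55, democracy_index=6.0))  # Medium
    print(baseline_risk(fotn_score=80, democracy_index=2.5))  # High

In practice something like this would only be a starting point to be combined with the other contextual information mentioned above, and a "High" result would mean measurements are not run from that vantage point.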
I think any project running client-side measurements is going to face similar challenges, so we'd be very open to figuring out how we can collaborate on best practices and have more discussions on this topic.
I'll have a closer look at your documents and try to provide more feedback. I can also jump on a call for a more detailed chat if that would be helpful.
All the best, Masashi