Hello,
I was pointed to this mailing list by Ben Zevenbergen.
It seems like there are a few familiar faces in here and I believe some of you are already quite familiar with the tool in question.
We have recently had some discussions on our OONI mailing list about the ethics of internet censorship-related measurements and what the best procedure should be for obtaining informed consent from our users.
You can find this thread here: https://lists.torproject.org/pipermail/ooni-dev/2014-December/000205.html
A volunteer started writing up some improvements to our current warning message (that is found here: https://github.com/TheTorProject/ooni-probe#read-this-before-running-oonipro...) and you can find the improvements to it here: https://lists.torproject.org/pipermail/ooni-dev/2015-January/000208.html
Some people have pointed out that the above message contains wording that is too vague and can end up excessively scaring users (or possibly even putting them in danger, because they have acknowledged that what they are doing could be illegal). This discussion mainly occurred on IRC, so unfortunately it isn't captured anywhere, but I would be happy to elaborate on it further if you are interested.
What we would currently need most is somebody who will take a look at the tool, think about the real risks (if any) that a user of it could face, and come up with wording that makes these risks clear to them.
I am happy to further discuss this either via Skype or on our mailing list.
~ Arturo
Hi Arturo,
Thanks for posting this message and request. I've been meaning to respond on the OONI list thread as well (Phillip Winter asked me to check it out).
With the panel I will have a closer look at the OONI warnings and user guidance.
In the interim I can share some information on how we are handling informed consent in the ICLAB project (project site: https://iclab.org/ github: https://github.com/iclab). As you know, this project is a collaboration between Citizen Lab, Stony Brook, and Princeton University. Because we are doing this through academic institutions, we were required to obtain ethics board review and approval before we could run measurements.
We have a very similar model to the one you are doing in OONI. We are providing a group of users with Raspberry Pi units and running pilot tests with them of our software and overall research protocol. Each user goes through an informed consent process where we present the objectives of the project, potential risks, and explain the user's rights as a research participant. You can find a copy of our informed consent document here: https://drive.google.com/file/d/0B_KamVIs1VmmV1dxZF9tRGRrTVU/view?usp=sharin... Only users who have agreed to this informed consent document are able to run tests in our system.
A challenge that you have also identified is that the risk of running measurements is highly contextual and depends on the specific location and situation. This is a hard problem for sure.
In our research protocol we have the idea to provide users with some baseline information about their vantage point that could help determine the relative research risk on a scale of Low - Medium - High. We put together indicators like the Freedom House “Freedom on the Net score” and Economist “Democracy Index score” to provide a baseline that could be combined with other contextual information. These are not perfect metrics and we are still developing this idea further but through this combination of information (and other potential indicators) we are working towards getting an approximation of relative risk. In our research protocol we would not allow measurements to be run in a "High" risk situation. Examples of high risk include areas with armed conflict or unrest (e.g. Syria) or countries that are otherwise clearly too risky and impractical for getting measurements (e.g., North Korea, Cuba). As part of the pilot phase of our research protocol we will be working with our pilot users to further refine this idea and see if and how it can better capture risk levels and scale.
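To make the idea above concrete, here is a minimal sketch of how two published indicators could be normalized and averaged into a rough Low/Medium/High label. Everything here is illustrative: the thresholds, the equal-weight average, and the function itself are my own assumptions, not the ICLAB protocol. It assumes Freedom on the Net scores run 0-100 (higher = less free) and Democracy Index scores run 0-10 (higher = more democratic).

```python
def risk_level(fotn_score: float, democracy_index: float) -> str:
    """Map two published indicators to a rough Low/Medium/High risk label.

    Hypothetical combination logic for illustration only; real protocols
    would weigh additional contextual information.
    """
    # Normalize both indicators to a 0-1 scale where 1 means higher risk.
    fotn_risk = fotn_score / 100.0          # Freedom on the Net: higher score = less free
    di_risk = 1.0 - democracy_index / 10.0  # Democracy Index: higher score = more democratic

    # Simple equal-weight average; the cutoffs below are arbitrary placeholders.
    combined = (fotn_risk + di_risk) / 2.0
    if combined >= 0.7:
        return "High"    # e.g. armed conflict or severe repression: no measurements
    if combined >= 0.4:
        return "Medium"
    return "Low"
```

Under a protocol like the one described, a "High" result would block measurements entirely, while "Low" and "Medium" would feed into the contextual judgment made with each pilot user.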
I think any project running client-side measurements is going to face similar challenges, so we'd be very open to figuring out how we can collaborate on best practices and have more discussions on this topic.
Will have a closer look at your documents and try to provide more feedback. I can jump on a call for a more detailed chat if that's desirable as well.
All the best, Masashi