Apologies for breaking the thread; I didn't have the original message.
From the mentioned paper (which I've only skimmed):
``The deployed model considers a time interval of seven (7) days to model connection rates (i.e. $t_i - t_{i-1} = 7$ days).''
If I understand correctly, this means only trends on a week-to-week basis (or longer) are considered, and higher-frequency variation is effectively unwanted noise? In that case, perhaps pre-processing the data with a low-pass filter would be useful.
Attached (1.png) is an example (in red) of filtering out all frequencies higher than the one corresponding to a one-week period, compared with the original data (green). This is the full data set for Switzerland, with the abscissa in seconds.
The result is a little less noise, which might help with your algorithm.
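In case it's useful, here is a minimal sketch of that kind of filtering, done as a brick-wall cutoff in the Fourier domain. It assumes the user counts are already loaded as a 1-D NumPy array sampled once per day; the function name and the sampling interval are illustrative choices of mine, not anything from the paper.

    import numpy as np

    def lowpass_week(x, sample_spacing_days=1.0):
        # Zero out every Fourier component whose period is shorter than 7 days.
        X = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), d=sample_spacing_days)  # cycles per day
        X[freqs > 1.0 / 7.0] = 0.0
        # Invert the transform to get the smoothed series back.
        return np.fft.irfft(X, n=len(x))

The hard cutoff is also what produces the ringing near the ends of the red curves.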
The same filter applied to the Egypt and Iran data (2.png and 3.png, respectively) doesn't harm the signal for those two censorship events, at least not by visual inspection. (You'd probably want to use a Hanning window or something similar to avoid those artifacts at the extreme ends of the red graphs.)
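For example, one way to avoid the hard cutoff altogether is a windowed FIR low-pass run forwards and backwards. Again only a sketch under the same daily-sampling assumption; the tap count is an arbitrary illustrative value, and this uses SciPy rather than a hand-rolled filter.

    import numpy as np
    from scipy import signal

    def lowpass_week_fir(x, fs=1.0, numtaps=29):
        # FIR low-pass designed with a Hann window; cutoff at 1/7 cycles/day,
        # i.e. keep only variation with a period of one week or longer.
        b = signal.firwin(numtaps, cutoff=1.0 / 7.0, window='hann', fs=fs)
        # filtfilt applies the filter forwards and backwards: zero phase shift
        # and much milder behaviour at the ends of the series.
        return signal.filtfilt(b, [1.0], x)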
But filtering like this would also mean that the signature of an event which starts and is over in less than a week, like this week's, gets lost...