David Anderson QC, the current Independent Reviewer of counter-terrorism powers, is conducting a review of internet surveillance. Anyone who thinks there aren’t independent-minded people watching the watchers really should check out his website and excellent Twitter feed.
He reads a lot, Mr Anderson. And he recently found the time to take a look at my e-short ‘Orwell versus the Terrorists’, published last week with Random House. Not much gets past him, and he emailed me to clarify a statement I’d made in that essay, and to ask me a couple of follow-up questions:
“I don’t want a society in which there are places terrorists or organised criminals can communicate utterly and inexorably beyond the reach of legally constituted intelligence agencies. I don’t want a single computer they can’t track, a code they can’t break, an email account they can’t access – I just want to make sure these powers are used in a very limited way, based on legal authority, and driven by clear principles we can all understand.
Does that mean you oppose universal end to end encryption?
Do you accept that “they” will include agencies (Russian) that don’t care much for authority or principles?
Does your formulation apply to retained (past) data/content as well?”
This gave me pause for thought, and I wrote a detailed reply, which elaborates my view on encryption. It might be of some interest to more than just David Anderson, so I thought I’d share my response here.
“I believe in the principle: that there should not be a place that is, and is known to be, entirely off limits to law enforcement. Would we allow that offline? That you mustn’t break this lock, or go into that room? I rather think not. I think it’s perfectly legitimate to say we want police to have the capability to access anything and everything – provided they apply the principles of proportionality and necessity when deciding whether to use it. (Even lawyers’ confidential letters to their clients are not off limits – and I personally think that’s acceptable: given your role, you may disagree…).
In a sense, this is the more important distinction between the sides: whether government should create the capability to access everything, even if it is rarely used. I think the meaningful fault line is between those who generally say ‘yes, as long as it’s not misused: hence the need to improve oversight’ and those who say ‘no, because it will always be misused, now or in future, and even building the capability is a form of intrusion’.
I tend to the first (social/liberal democrat), others to the second (liberal/libertarian). Both are legitimate positions to take, neither is stupid, and neither should be treated with the sort of derision that tends to characterise the discussion.
The problem with my position is, I think, mainly a technical one. In principle I’m confident, but less so about what it would mean in practice. Creating back doors to encryption would be too readily misused by non-democratic regimes; ending end-to-end encryption or conducting bulk data access tends to undermine confidence in the entire net. That’s an unacceptable price to pay, in my view.
But, at least, these are the important parameters of the debate. I end up concluding that: a) in principle I want everything to be accessible to law enforcement and intelligence agencies, but with powerful, publicly understood oversight (and we’re not there yet); but I recognise that b) it’s probably not achievable, because of the technical issues and the costs it would create to get there.
So I’d like to get as close as possible without creating the problems I’ve outlined, and then look for alternatives to fill the gap. Hence more powers, for example, for targeted malware: if you cannot break encryption, install malware on the computer. You still then have a position where there is no piece of information technically inaccessible to the law, even if getting at it is a little more difficult.”