Passing the buck? Why censorship isn’t sufficient to tackle extremism

This morning, the House of Commons’ Home Affairs Select Committee released a report entitled “Radicalisation: the counter-narrative and identifying the tipping point”. It follows a year-long inquiry focused on extremism affecting Muslim communities, much of it spread by Daesh and similar terrorist organisations. A great deal of this material circulates through various internet channels, especially social media sites such as YouTube, Facebook and Twitter.

The headline statement here is a strident criticism of social media companies, who are described as “consciously failing” to prevent their sites from being used to spread radicalising and violent content. The report claims that existing efforts to quell extremism are insufficient, and that social media companies “must accept that the hundreds of millions in revenues generated from billions of people using their products needs to be accompanied by a greater sense of responsibility and ownership for the impact that extremist material on their sites is having.”

In many ways, the broad message of this report is nothing new. In November 2014, the incoming head of GCHQ wrote an open letter to social media companies, claiming that their platforms had become “command and control networks of choice for terrorists and criminals”, and calling on companies to support British intelligence agencies in combating online extremism. More recently, Demos has been looking at the effectiveness and difficulties of promoting counter-speech on Facebook, a potentially effective means of limiting the impact of extremist content that does reach social media users. We’ve also worked to identify and examine Islamophobic content on Twitter, which becomes markedly more prevalent in the wake of large-scale terrorist attacks.

The report published today makes a number of helpful recommendations for improving the current situation. At present, even after material has been identified as dangerous by the police or government agencies, the process of liaising with social media companies to get it removed can be time-consuming and arduous, and there is a pressing need to streamline it. Part of this cooperation needs to involve greater transparency about the number and types of posts that have been taken down, and an explanation, where content has been assessed and left up, of why that decision was made.

The report also bemoans the critical shortage of Punjabi, Urdu and Kashmiri speakers within the British security services, and makes the case for increased funding for the Met’s Counter Terrorism Internet Referral Unit to enable it to respond 24/7 to reports of extremism. None of these issues will be trivial to resolve, but they are at least relatively straightforward problems with clear solutions, and they should be addressed as soon as possible.

Under all of this, however, looms a much larger and thornier question: what can and should social media companies be doing to identify and remove problematic content? On this point, today’s publication is keen to point out areas for improvement, but broadly fails to suggest any meaningful answers.

In the report, social media companies are repeatedly berated for failing to employ enough analysts to identify and remove content. It is not clear, however, that throwing more people at the problem will work. With user bases in the billions, many social media platforms have long passed the point at which hiring a few hundred extra moderators to manually search for and remove posts could provide an effective solution.

The report also suggests that companies need to invest in technical capabilities to identify extremist content – if Google can use algorithms to predict consumer spending habits, why can’t it pinpoint extremism? Predicting what someone will buy, however, is a vastly different task from first defining and then identifying radicalising content. While algorithms undoubtedly have an important role to play in helping to sift through vast swathes of data, there is no technical silver bullet here, and the problem at hand is likely to require entirely novel approaches and technologies to crack.

Even a process which did enable companies to recognise problematic content, however, would run up against a problem fundamental to social media sites. These platforms thrive on content, so it’s in their best interests to make creating an account and uploading something as fast and painless as possible. As a result, it’s a lot easier for a user to publish content than it is for the site to take it down. In the time it takes for a post to be identified as dangerous, flagged, examined and then removed, hundreds more can spring up in its place. And if one platform does become too difficult for extremist groups to post on, they are always free to move to another.

The social media companies, then, are engaged in an arms race with extremists, who have developers, users and content creators ready to respond to and circumvent even concerted efforts to keep extremist content off their platforms.

Platforms also have a difficult tightrope to walk between complying with the regulations of the countries in which they operate, which often require them to disclose personal information, and an increasing demand from their users for robust privacy. Recently, for example, this latter impulse has led to Apple’s iMessage service and the messaging company WhatsApp announcing that all messages sent over their platforms are end-to-end encrypted. As such encryption becomes the industry standard, striking a balance between government demands for disclosure and user expectations of privacy is going to become increasingly tricky for social media companies.

At the heart of all of this lies the question of how effective censorship can ever be. While social media companies certainly have a role to play in reducing the amount of extremist content online, and limiting the number of people exposed to it, there will never be a watertight solution. No matter the approach taken, the decentralised, anonymous nature of the internet means that, inevitably, some people are going to see this content.

The long-term solution to combating radicalisation, then, cannot simply be one of blanket censorship, but should rather focus on skills and education. The challenge is to increase levels of ‘digital literacy’ – to enable people to calmly assess the claims made in advertisements for terrorist organisations, to see through the slickly edited videos and incitements to murder. The challenge is to educate people so that, when they inevitably come across this content online, they are more likely to reject it – not simply because they’ve seen a glossier, more attractive advertisement for the western way of life, but because they have been given the tools which enable them to rationally assess its claims and motives.

This task is clearly beyond the sole remit of the social media companies – it is one in which the government, the education system, civil society, religious communities and the media must all play a role. It is a task that needs to be taken up soon, and this latest report only serves to highlight its urgency.