Today saw the release of the government’s Online Harms White Paper, a publication spearheaded by the Secretary of State for Digital, Culture, Media & Sport, Jeremy Wright, and the Home Secretary, Sajid Javid.
Their report lays out plans to tackle online harms (a somewhat catch-all term, but it covers everything from terrorism and child sexual exploitation to cyberbullying and screen time). They’ve called for a new regulatory framework that will make online companies tackle this harmful content, and an independent regulatory body that will ensure action is taken.
When it comes to regulating the internet we must move with care. Failure to do so will introduce – rather than reduce – "online harms".
Our initial suggestions below 💪 pic.twitter.com/AE9w0fMNj4
— Privacy International (@privacyint) April 8, 2019
Some of the issues the white paper discusses are, without doubt, problematic. Yes, there are problems with online terrorism content, with organised crime, and with hate crime, harassment, and incitement of violence. No one contests that the Internet is a medium that enables these things to spread and to reach vulnerable people. What’s not yet evident is how best to address them.
Fake news is a hot topic, and the report rightly emphasises the need to combat it. It’s not quite clear how a duty of care can meet this: we’re still trying to fight it on a technological front. A more tenuous issue is the government’s concern with screen time, perhaps put there as a sop to concerned parents, given that the Royal College of Paediatrics and Child Health recently emphasised that there is little to no evidence of direct harm.
While the government’s paper is well-intentioned, it is much less well-defined. The opening paragraph of the Executive Summary talks of “the prevalence of illegal and harmful content online”, only to be followed in the very next paragraph by “Illegal and unacceptable content and activity”. Illegal activity is defined by law, and harmful activity could be evidenced, but how do we deem what is acceptable and what is not? More importantly, who gets to decide?
There is definitely a case to be made for content providers to be more accountable for and more transparent about the information they make available. The trouble is that the ‘duty of care’ model, where social media companies are liable, could mean we see blunt and catch-all policing of sites, known as overblocking. That’s not good from a freedom of expression point of view – or from a business point of view.
In December 2018, Tumblr banned the vast majority of not-safe-for-work content so that it could stay in Apple’s good books and on its App Store. Tumblr’s traffic fell by 20 per cent. Tumblr had been a huge source of support and sharing for sexual subcultures; the adult content ban also policed women’s bodies in particular, deeming them censorable. Tumblr implied that its changes were to fight the spread of child abuse imagery, but its heavy-handed, broad approach to remove the illegal also targeted the ‘unacceptable’, resulting in moral censorship.
Neil Brown, Internet, telecoms and technology lawyer and managing director of law firm decoded:Legal, has concerns: “How much of an imposition into the fundamental right of free speech are we prepared to accept in the name of preventing harm? Some will err towards overblocking in the name of protection, others will stress the challenges — moral, technical, and legal — in monitoring everyone’s speech to determine if it is to be permitted.” He stresses the practical limitations: “The white paper takes aim at lawful but ‘harmful’ or ‘unacceptable’ content, as well as illegal material, but programming computers to make these decisions at the kind of scale needed to meet the government’s expectations will be hugely challenging, if, indeed, possible at all.”
The government admits it doesn’t have all the answers yet. Consultation questions are peppered throughout the white paper, and individuals and organisations are invited to comment on them. It is particularly concerned about accurate information: “Our society is built on confidence in public institutions, trust in electoral processes”, the report tells us. The government wants “a free, open and secure internet”. Let’s hope we can find the correct balance so that our right to expression is equally free, open and secure.
Kate Devlin is a writer and academic, and is Senior Lecturer in Social and Cultural Artificial Intelligence at King’s College London | @DrKateDevlin