Today saw the release of the government’s Online Harms White Paper, a publication spearheaded by the Secretary of State for Digital, Culture, Media & Sport, Jeremy Wright, and the Home Secretary, Sajid Javid.
Their report lays out plans to tackle online harms (a somewhat collective term, but it covers everything from terrorism and child sexual exploitation, to cyberbullying and screen time). They’ve called for a new regulatory framework that will make online companies tackle this harmful content, and an independent regulatory body that will ensure action is taken.
Not everyone is convinced. Organisations such as Open Rights Group, Article 19, and Index on Censorship have warned that this could amount to state regulation of freedom of expression.
Here's our initial take on the UK government's new proposal to tackle #onlineharms: https://t.co/Q1ZhyKycqx

When it comes to regulating the internet we must move with care. Failure to do so will introduce – rather than reduce – "online harms".

Our initial suggestions below 💪 pic.twitter.com/AE9w0fMNj4

— Privacy International (@privacyint) April 8, 2019
Some of the issues the white paper discusses are, without doubt, problematic. Yes, there are problems with online terrorism content, with organised crime, and with hate crime, harassment, and incitement of violence. No one contests that the Internet is a medium that enables these things to spread and to reach vulnerable people. What’s not yet evident is how best to defuse them.
Fake news is a hot topic, and the report rightly emphasises the need to combat it. It’s not quite clear how a duty of care can meet this: we’re still trying to fight it on a technological front. A somewhat more tenuous issue is the government’s concern with screen time, perhaps put there as a sop to concerned parents, given that the Royal College of Paediatrics and Child Health recently emphasised that there is little to no evidence of direct harm.
While the government’s paper is well-intentioned, it is much less well-defined. The opening paragraph of the Executive Summary talks of “the prevalence of illegal and harmful content online”, only to be followed in the very next paragraph by “Illegal and unacceptable content and activity”. Illegal activity is defined by law, and harmful activity could be evidenced, but how do we deem what is acceptable and what is not? More importantly, who gets to decide?