We’ve become accustomed to things being delayed over the last couple of years, but even by recent standards, the UK Government’s Online Safety Bill has been a long time coming.
It’s been three years since a White Paper was published, proposing greater power over potentially harmful online content.
Since then, the usual freedom-of-speech versus censoring-hate-speech debate has raged – mostly online – while politicians have been side-tracked by bigger issues.
Last month, two new duties were added to the Online Safety Bill, making this a timely moment to consider how it might affect our interactions with the online world.
What’s the OSB all about?
Since the World Wide Web debuted in 1991, the internet has been only lightly regulated, and usually by international agencies rather than domestic governments.
For instance, ICANN was founded in 1998 to coordinate the domain name system and guard against domain mismanagement and cybersquatting.
The internet’s global nature has hitherto made domestic policing of online spaces incredibly difficult.
Nonetheless, the UK Government is determined to tackle unsavoury online phenomena like trolling and fraud.
It comes as no surprise that pornography is also in their sights – blocking illegal material while making it harder for children to view legal sites intended for over-18s.
Although our Government can’t directly force overseas bodies like Facebook to change their policies, it will introduce a domestic duty of care regarding potentially harmful content.
We’ve already seen a similar approach with Europe’s GDPR, which requires even small businesses to obtain user consent for cookies.
What is the Online Safety Bill proposing?
The OSB will cover any user-to-user service, which extends far beyond websites to areas like photographs, databases and even oral communications.
It also extends far beyond verified users. If content could be in any way encountered or experienced by anyone, it’s covered in the Bill.
Nor does the host website need to be based in this country. If it targets UK users, reaches a significant number of people here or can simply be accessed by Brits, it falls within the Bill’s scope.
Ofcom would be given new powers to block access to websites, services and apps, including content found through search engines or online adverts.
They could instruct ISPs and mobile networks operating in this country to block sites, with severe penalties if these providers themselves fail to comply.
Journalistic content should be exempted, though Reddit and Instagram have blurred the boundaries of who calls themselves a journalist, and where their content is published.
If someone vehemently objects to historical facts, satirical commentary or simply an opposing point of view, it becomes very difficult to determine whether the content is harmful or not.
At least there shouldn’t be much disagreement about a national crackdown on online scams, child sexual exploitation and trolling.
Penalties for breaching the Bill’s regulations could include fines of up to ten per cent of a company’s annual turnover.
Is the Online Safety Bill really necessary?
We’ve written previously about the arguments for and against online free speech, and it’s fair to say people’s views on this are pretty entrenched.
If you’re a victim of revenge porn or racist abuse, no counter-arguments will dissuade you from wanting such content eliminated.
Equally, advocates of free speech are horrified by proposals to effectively censor legal speech and content, viewing it as the thin end of a dangerous wedge.
There are certainly many ambiguities, which the draft legislation doesn’t satisfactorily resolve.
Who decides what qualifies as harmful, and who ensures these moving goalposts are repositioned as cultural and societal norms evolve?
It’s easy to justify banning terrorism-related content, but should descriptions of bulimia be banned? People should be able to discuss mental health issues, but what if it inspires copycat behaviour?
The Government describes the Online Safety Bill as world-leading, yet it’s far from foolproof.