Who should decide on harmful online content

Paul Matthew from ITP writes:

On one hand, it's their network so their rules. On the other, these companies have grown to the point that they're now a core part of the Internet itself, and their actions have a significant impact on everyone. They're now in a position to shape public opinion, and arguably do, by shadow-banning or outright banning views they don't approve of.

Given the scale of these services, this means it's now up to a very small group of extremely rich and powerful unelected men (mostly) to decide what is acceptable speech and content for society as a whole, over and above what the law already prescribes.

Surely we all have to agree that that’s a problem.

And we're not just talking about speech. For example, these services have a proven ability to influence election outcomes, among much else. At the other end of the spectrum, their algorithms can actively push content to you that research [pdf] suggests doesn't just shape opinion but can fully radicalise large groups of people.

So what’s the answer?

It surely has to be recognising that once a "service" reaches a large enough scale, its obligations have to expand, at least partially, beyond its shareholders to society as a whole. Looked at in this context, you could certainly argue that those obligations might include not censoring content or users except where content is explicitly illegal, since doing so infringes users' right to free speech (remember, we're thinking societal obligations here).

I think this will be one of the huge issues of the decade. Being banned from Twitter is not the same as being kicked off a small blog such as Kiwiblog. When a platform reaches giant global status, its decisions have a profound impact on free speech.
