Social media can’t do the impossible

The Herald reports:

Some 39 days after the Christchurch mosque massacres, social media sites are still struggling to stamp out copies of the gunman’s video.
New York-based researcher Eric Feinberg reported he had found another 12 copies across Facebook (which was hosting five copies), Facebook-owned Instagram (six) and Google-owned YouTube (four). All were live as of Monday NZT.

This is not surprising. Social media works by letting people upload content without pre-approval. If a moderator had to pre-approve every post, Facebook, Instagram and YouTube would die off.

Presented with earlier links uncovered by Feinberg, a Facebook spokesman said there was an ongoing effort to eliminate all copies of the alleged gunman’s video.

On April 11, a US Congressional hearing was told the clip was not gory enough to trigger Facebook’s automated filters.

“You can train AI [artificial intelligence] to effectively look for things like nudity using data like skin tone – i.e. a high level of skin tone in a particular piece of content may flag it for human review and actioning,” a Facebook insider said.
“[But] in graphic violence content, we do use data around things like tones and colours that may identify or imply violent content; however, the AI did not identify this in the Christchurch video.”
Facebook has turned its attention to audio – and specifically, sounds that could be gunshots – in its ongoing attempts to recognise terror content in its real-time video-streaming service Facebook Live.
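
That skin-tone approach is worth dwelling on, because it shows how crude these filters actually are. Here is a minimal sketch of the kind of heuristic the insider describes: score an image by the fraction of pixels in a rough skin-tone range, and send high-scoring content to human review. The RGB rule and the 0.3 threshold below are illustrative assumptions on my part, not Facebook’s actual filter.

```python
from PIL import Image
import numpy as np

def skin_fraction(path: str) -> float:
    """Return the fraction of pixels that fall in a rough skin-tone range."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Crude RGB skin mask: a classic rule-of-thumb range,
    # chosen here purely for illustration.
    mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
    return float(mask.mean())

def needs_human_review(path: str, threshold: float = 0.3) -> bool:
    # "A high level of skin tone ... may flag it for human review"
    return skin_fraction(path) > threshold
```

A filter like this is cheap enough to run on every upload, but it has nothing useful to say about a first-person shooting video with no nudity, which is exactly the gap the insider is describing.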

There is a limit to what AI can do. I’m sure Facebook would love a tool that could identify any segment of the video and delete copies automatically with no false positives.

Council for Civil Liberties chairman and Tech Liberty founder Thomas Beagle has long been wary of the potential of “protective” laws to be used for political censorship or to otherwise undermine free speech.
Beagle says that fringe sites, usually hosted offshore, will ignore any social media crackdown law, while the answer for Facebook and the other platforms is to develop better systems to filter or block content that violates their terms.

I’m very wary of any change to our domestic laws. The solution is better technology, but we need to realise there is still a limit to what can be done if people are determined to share something.

The original livestream was seen by fewer than 200 people and quickly deleted by Facebook. Google also removed a YouTube version within minutes. They actually did respond quickly.

But the problem is that users kept uploading versions of it. This was a human problem: it wasn’t Facebook and Google uploading them.

The videos were in breach of the terms and conditions of all the major social media companies, so there is no dispute that they shouldn’t be on the Internet. But again, the issue is: how do you stop users re-uploading content when there are billions of uploads a day?
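
The standard answer at that scale is fingerprinting: hash the known-bad video, then compare every new upload against the list of banned fingerprints before it goes live. Here is a minimal sketch, assuming an average-hash fingerprint over sampled frames; the 8x8 hash, the Hamming-distance cut-off of 10 and the majority-of-frames rule are illustrative numbers, not any platform’s real pipeline.

```python
from PIL import Image
import numpy as np

def frame_hash(frame: Image.Image) -> int:
    """64-bit average hash: one bit per pixel of an 8x8 greyscale thumbnail."""
    small = np.asarray(frame.convert("L").resize((8, 8)), dtype=np.float32)
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_banned(upload_hashes: list[int],
                   banned_hashes: set[int],
                   max_distance: int = 10) -> bool:
    """Flag an upload if most sampled frames sit near a known-bad hash."""
    hits = sum(
        1 for h in upload_hashes
        if any(hamming(h, bad) <= max_distance for bad in banned_hashes)
    )
    return hits >= max(1, len(upload_hashes) // 2)
```

This catches exact and lightly re-encoded copies cheaply. The catch is that cropping, mirroring, filtering or re-recording the video off a screen shifts the frame hashes past any fixed threshold, so a determined uploader can always produce a copy the filter has never seen. That is the limit being described here: the technology catches most copies, not all of them.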
