The NZ Mosque Shooting Has Exposed A Dangerous Hole In Facebook, Twitter & YouTube With No Fix
In the aftermath of the Christchurch shooting in New Zealand, social networks went into overdrive, desperately trying to scrub the shooter's live video and manifesto off their platforms.
And as a result, their inadequacies with content moderation have come to the fore.
Images courtesy: Reuters
Facebook says that, in just the first 24 hours after the massacre, it removed 1.5 million instances of the body cam footage of the attack. At least 1.2 million of those instances were stopped automatically by Facebook's upload filter.
Unfortunately, that means about 300,000 copies of the vile video snuck past Facebook's precautions. Likely the only reason its moderators even caught them afterwards is that the teams were on high alert following the shooting.
Facebook is receiving the brunt of the criticism, seeing as it was where the killer published the live video. During the livestream, the company says the video got only about 200 views, reaching a grand total of roughly 4,000 views by the time it was pulled down.
So why was the video allowed to remain online for so long? Did Facebook not see the horror taking place? Well actually...yeah.
While Facebook has auto-detection to stop a lot of uploads on its platform, including revenge porn, it didn't have a way to automatically detect what was in the shooter's video. And then, its content moderation teams missed it too.
A concerned viewer did eventually make a complaint about it to Facebook, but only 29 minutes after the livestream had begun. By that point, the broadcast itself had already ended 12 minutes earlier, after streaming to rabid viewers for 17 minutes straight.
And just so you know, New Zealand Police spotted it first and had to alert the social media company to the video on its own platform.
Through all of this, Facebook was obviously not fast enough to stem the tide of racists and trolls eager to get their hands on the footage. In the melee, an 8chan user managed to download and share the video to a third party site. After that, it was out in the open.
The video began being posted to Twitter, where everyday users already clued into the situation were attempting to manually flag and report it. Just a couple of hours of searching could net you 20 tweets with the video. CNET talked to one woman who, for two days after, found 200 instances of the video on the site.
In fact, you can STILL find tweets online with links to third-party video sites hosting the footage.
My uncle sent me the video of the Christchurch shooting on whatapp and I just want to puke. How can a man be so evil and in his mind think he is doing right. This is the type of scum trump called very fine people. This is the fucking culture this far-right nationalism is spawning
- Suburban Dude (@picassoi_) March 15, 2019
i just fucking accidentally saw the video of the christchurch shooting and im fucking shaking
- hasinthi (@hasinthiherath) March 15, 2019
DONT SHARE THE VIDEO U FUCK FACES I STG
YouTube wasn't any better either. The company reported taking down tens of thousands of versions of the video in the aftermath of the attack, as well as banning a number of accounts made to glorify the shooter. The company called the volume of videos about the shooting in that first 24 hours "unprecedented", saying uploads came in so fast that a new copy of the shooting video was being posted every second. In response, it said it took several emergency measures:
Automatically rejecting footage of the violence, temporarily suspending the ability to sort or filter searches by upload date (which limited the ability to discover and view violative content while our teams worked to remove it), and...
- YouTubeInsider (@YouTubeInsider) March 18, 2019
The video, obviously, also made it to Reddit, to a subreddit called r/watchpeopledie. The controversial section has come under fire before for hosting video of people's deaths in accidents, murders, gang violence, and the like. Its users maintain it's not a celebration of the violence but a way to embrace the reality of death in the world so as to not be broken by it. The subreddit hosted the video of the shooting and, when incensed outsiders demanded it be taken down, refused, insisting it broke no laws. Reddit instead just banned the whole subreddit.
The glaring problem in social media
But this is exactly the problem with social media, and it's for similar reasons that fake news has proven such an issue on these platforms. Social networks like Facebook and Twitter have been chasing expansion for so long, they've grown beyond their wildest dreams. Now, anyone, anywhere in the world can chat with people anywhere else. They can all share news and opinions in real time.
Sadly, that means they can also share their racism, hate, and violent extremist views. And with how big social media has grown, it's impossible to moderate all that content with human teams alone. Back in 2017, Facebook had just 4,500 of these moderators, until backlash over the kind of content it was leaving online pushed that number to 15,000.
That's 15,000 moderators globally for 2.3 billion Facebook users, or one moderator for over 1.5 lakh users. Obviously, that's not enough. And even worse, those moderators tell heartbreaking stories of the content they see driving them to depression, suicidal thoughts, substance abuse, anxiety, PTSD, and more.
But there's no way to hand the job over to automation entirely either. Neither Facebook nor anyone else has the AI technology to automatically take down violent or exploitative videos with full accuracy. YouTube uses its Content ID system to automatically take down copyrighted content, but that has some caveats: it only works for material that has already been flagged at least once, and it tends to fail if the video is edited even slightly, say by adding a border frame or speeding it up a little.
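To see why those small edits matter, here is a minimal, purely illustrative sketch in Python. It is not YouTube's or Facebook's actual matching pipeline; the frame data is synthetic and the helper names (dhash, add_border, hamming) are made up for this example. It compares an exact cryptographic hash of a tiny "video frame" with a simple perceptual difference hash, before and after simulating a border-frame edit.

```python
# Illustrative only: a toy frame-fingerprint comparison, not any platform's
# real system. Shows why exact hash blocklists miss lightly edited re-uploads.
import hashlib

WIDTH, HEIGHT = 9, 8  # tiny stand-in for a downscaled grayscale video frame

def dhash(pixels):
    """Difference hash: one bit per horizontally adjacent pixel pair."""
    bits = []
    for y in range(HEIGHT):
        for x in range(WIDTH - 1):
            bits.append(1 if pixels[y * WIDTH + x] > pixels[y * WIDTH + x + 1] else 0)
    return bits

def add_border(pixels):
    """Simulate the 'border frame' edit by blacking out the outer ring of pixels."""
    edited = list(pixels)
    for y in range(HEIGHT):
        for x in range(WIDTH):
            if x in (0, WIDTH - 1) or y in (0, HEIGHT - 1):
                edited[y * WIDTH + x] = 0
    return edited

def hamming(a, b):
    """Count how many bits differ between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Synthetic grayscale frame (pixel values 0-255).
original = [(x * 28 + y * 3) % 256 for y in range(HEIGHT) for x in range(WIDTH)]
edited = add_border(original)

# The exact (cryptographic) hashes differ completely after the edit, so a
# blocklist of known-bad file hashes never finds a match.
print(hashlib.sha256(bytes(original)).hexdigest()[:16])
print(hashlib.sha256(bytes(edited)).hexdigest()[:16])

# The perceptual hashes stay close (only a few of the 64 bits flip), so a
# distance-threshold comparison could still catch the edit, but exact
# matching cannot.
h1, h2 = dhash(original), dhash(edited)
print("bits changed:", hamming(h1, h2), "of", len(h1))
```

Real fingerprinting systems are far more sophisticated than this sketch, but the underlying trade-off is the same: matching by similarity threshold catches light edits at the cost of false positives, and a sufficiently aggressive edit, re-encode, or re-filming can still push a copy past whatever threshold is set.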
So maybe we're stuck in that loop for a bit. But if these companies genuinely want to do something to help the situation, they need to respond to user complaints faster. A person shouldn't have to wait days for Facebook to respond to their complaint that someone is impersonating them online. A woman shouldn't have to sit on her hands until Twitter can get around to banning her harasser from the platform, if it miraculously chooses to do that at all.
But even more than user complaints, nothing works so well as de-platforming hate groups and their ilk. When payment portals, web hosting services, and domain registrars ban proponents of hate speech, it goes a long way towards silencing that message of violence.
If we want to stop the hate spreading, as New Zealand Prime Minister Jacinda Ardern demands, social networks need to be forced out of their profit-seeking ways and held accountable. We need to have stricter laws regarding things like hate speech online, and we need to enforce them in a stronger fashion than ever before.