Tech and social media giants such as Facebook, Google, YouTube and Twitter have been plagued for years by objectionable content: pornography, copyright violations and videos that violate their stated terms of service. On March 15, Facebook failed to detect and suspend the livestream of the massacres at two mosques in Christchurch, New Zealand. The gut-wrenching 17-minute video went viral, and the New Zealand authorities have been calling on the tech giants to clamp down on the sharing of such violent content.
According to Vice’s Motherboard, Facebook hires content moderators to review flagged videos and identify, within five minutes, warning signs such as the display of weapons or cries for help. Despite these standard procedures, more than 800 digital fingerprints of the New Zealand mass-shooting video were registered.
Can technology be a better content moderator than humans?
Technology may not understand the nuances of human emotion, but it can process massive amounts of data quickly and without prejudice.
For instance, Hany Farid, a computer science professor at Dartmouth College in the United States, has combatted child pornography with video hashing. The technique breaks a video down into key frames and assigns each frame a distinct alphanumeric signature, or hash. These hashes form a dataset, and every newly uploaded video is matched against it. The method thus allows reposts to be detected and further re-uploads to be blocked.
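The hash-and-match idea described above can be sketched in a few lines of Python. This is a toy illustration, not Farid's actual algorithm: real systems compute robust perceptual signatures that survive re-encoding and cropping, whereas the average-threshold signature below, the frame sampling step, and all function names are illustrative assumptions. Frames are modelled as small grids of pixel brightness values.

```python
def frame_signature(frame):
    # Toy perceptual signature (assumption, not a production hash):
    # each pixel brighter than the frame's mean becomes a 1, else a 0.
    flat = [p for row in frame for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def video_hashes(frames, step=1):
    # Stand-in for key-frame extraction: sample every `step`-th frame
    # and collect the signatures into a dataset.
    return {frame_signature(f) for f in frames[::step]}

def is_reupload(candidate_frames, known_hashes, min_matches=1):
    # A candidate video is flagged when enough of its frame
    # signatures already appear in the known dataset.
    matches = video_hashes(candidate_frames) & known_hashes
    return len(matches) >= min_matches

# Build the dataset from an "original" video of two 2x2 frames,
# then test a re-upload and an unrelated clip against it.
original = [[[10, 200], [30, 220]],
            [[5, 5], [250, 250]]]
database = video_hashes(original)
print(is_reupload(original, database))            # re-upload is caught
print(is_reupload([[[0, 255], [0, 0]]], database))  # unrelated clip is not
```

In practice the matching must also tolerate small edits (watermarks, mirroring, re-compression), which is why production systems use perceptual hashes rather than exact ones.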
In view of Facebook’s current protocols, Farid told CNN: “It’s been proven to work in the child abuse space, the copyright infringement space and now in the extremism space — so there are no more excuses.
“You can’t pretend that you don’t have technology. The decision not to do this is a question of will and policy — not a question of technology.”
As more and more viewers rely on social media for news and information, the social media giants ought to review their role as broadcasters — and help maintain social discipline and order.