One restriction is a “one-strike” policy, which bars users who violate its rules from the Facebook Live service for set periods of time, starting from their first offence. “For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” wrote Guy Rosen, Facebook’s vice-president of integrity, in a statement.
The announcement came shortly after global leaders, including New Zealand Prime Minister Jacinda Ardern, urged tech companies to sign a Christchurch Call pledge to limit the spread of extremist content online.
In a joint statement, Facebook, alongside Microsoft, Twitter, Google and Amazon, said: “The Christchurch Call announced today expands on the Global Internet Forum to Counter Terrorism (GIFCT), and builds on our other initiatives with government and civil society to prevent the dissemination of terrorist and violent extremist content.”
To address the abuse of technology to spread terrorist and violent extremist content online, Facebook committed to introducing checks on live-streaming. These include enhanced vetting measures, such as streamer ratings or scores, account-activity reviews and validation processes, as well as moderation of certain live-streaming events where appropriate.
Facebook will also continue investing in technology that improves its capability to detect and remove extremist content online, including the extension or development of digital fingerprint and artificial intelligence (AI)-based technology solutions.
Image credits: https://www.flickr.com/photos/stockcatalog/41234217792/in/photostream/