Facebook explains why its A.I. didn’t detect the New Zealand mosque shooting video before it was viewed 4,000 times
March 21, 2019
Facebook has explained why its artificial intelligence tools failed to detect the video of the New Zealand mosque shooting that was live-streamed on its site last week and viewed 4,000 times before it was removed. A suspected gunman killed 50 people in an attack on two mosques in Christchurch.
Facebook said in a blog post Wednesday night that it removed the video after a user flagged it for the first time 29 minutes after the stream began. Several social media platforms removed the original video from their sites but quickly saw copies reappear faster than their moderation systems could keep up. Users also altered the video to evade automatic detection.
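Altered re-uploads defeat exact file matching because any re-encoding changes a file's bytes, so platforms generally rely on perceptual hashes, which tolerate small changes but can still be fooled by cropping, filtering, or re-recording a screen. Below is a minimal sketch of one common technique, average hashing of a video frame. It is a generic illustration, not Facebook's actual matching system, and the file names are hypothetical.

```python
# Minimal sketch of perceptual (average) hashing, one common way platforms
# match near-duplicate images and video frames. Generic illustration only.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink a frame to an 8x8 grayscale thumbnail, then set one bit per
    pixel: 1 if the pixel is brighter than the mean, else 0."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests the same content."""
    return bin(a ^ b).count("1")

# Two frames are treated as copies if their hashes are nearly identical.
# Heavy edits push the distance up, which is how altered uploads slip past.
original = average_hash("frame_original.png")    # hypothetical file names
altered = average_hash("frame_reuploaded.png")
print("match" if hamming_distance(original, altered) <= 5 else "no match")
```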
Facebook has relied on a mix of AI and human review to assess and remove content that violates its policies, and has largely seen success when it comes to removing porn and terrorist propaganda from its site. But Facebook said in the post that training AI to detect mass-shooting videos is harder than training it to detect nudity, because the system needs a vast amount of example content to learn from and such events are thankfully rare. On Tuesday, a congressman asked Facebook CEO Mark Zuckerberg and other tech leaders to brief lawmakers on how the New Zealand video spread while other terrorist content has largely been removed.
“This particular video did not trigger our automatic detection systems,” Facebook wrote. “To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare. Another challenge is to automatically discern this content from visually similar, innocuous content – for example if thousands of videos from live-streamed video games are flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.”
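The review-queue problem Facebook describes is essentially a base-rate problem: when real events are extremely rare, even an accurate classifier produces mostly false alarms. A back-of-the-envelope sketch, using entirely made-up numbers:

```python
# Base-rate illustration of the false-positive problem in the quote above.
# All numbers are hypothetical, chosen only to show the effect.
streams_per_day = 1_000_000    # hypothetical daily live streams
real_events = 1                # genuinely violent broadcasts (rare)
true_positive_rate = 0.99      # classifier catches 99% of real events
false_positive_rate = 0.01     # and wrongly flags 1% of benign streams

flagged_real = real_events * true_positive_rate
flagged_benign = (streams_per_day - real_events) * false_positive_rate

print(f"real events flagged:    {flagged_real:.2f}")
print(f"benign streams flagged: {flagged_benign:,.0f}")
# Roughly 10,000 false alarms for every real event: human reviewers could
# easily miss the one video that matters, which is the problem described.
```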
Facebook said it will take steps to strengthen its detection technology. The company said it used an “experimental audio-based technology which we had been building to identify variants of the video.” It also said it will explore whether its AI can be applied to videos while they are being live-streamed.
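Facebook has not described how its audio-based matching works. The sketch below illustrates the general class of technique, a crude spectral-peak fingerprint, under the assumption that the soundtrack survives visual re-edits; every name and parameter here is illustrative, not Facebook's method.

```python
# Toy audio fingerprint: summarize a clip by the dominant frequency in each
# short window, so visually altered copies with the same soundtrack still
# produce a similar signature. Illustration only.
import numpy as np

def audio_fingerprint(samples: np.ndarray, rate: int, window_s: float = 0.5):
    """Return the dominant frequency (Hz) of each half-second window."""
    window = int(rate * window_s)
    peaks = []
    for start in range(0, len(samples) - window, window):
        chunk = samples[start:start + window]
        spectrum = np.abs(np.fft.rfft(chunk))
        freqs = np.fft.rfftfreq(window, d=1.0 / rate)
        peaks.append(freqs[int(np.argmax(spectrum))])
    return np.array(peaks)

def similarity(fp_a: np.ndarray, fp_b: np.ndarray, tol_hz: float = 20.0) -> float:
    """Fraction of aligned windows whose peak frequencies roughly agree."""
    n = min(len(fp_a), len(fp_b))
    return float(np.mean(np.abs(fp_a[:n] - fp_b[:n]) < tol_hz))

# Synthetic example: a 440 Hz tone and a noisy "re-encoded" copy of it.
rate = 8000
t = np.arange(rate * 3) / rate                   # three seconds of audio
clip = np.sin(2 * np.pi * 440 * t)
copy = clip + 0.05 * np.random.randn(len(t))     # degraded variant
print(similarity(audio_fingerprint(clip, rate), audio_fingerprint(copy, rate)))
```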
Facebook said it will also work to review live-streamed videos more quickly, as it already does for videos reported for suicide. The company will expand its categories for accelerated review to include videos like the one from New Zealand.
One strategy Facebook said would not be effective is adding a time delay to live videos. Given the sheer volume of daily broadcasts, the company said, a delay would not address the core of the problem and would only further slow the user reports that help it detect harmful content and report criminal activity to the police.