US tech companies have declined to release data on the online spread of footage of last week’s shooting in Halle, Germany, despite pledging greater transparency as part of New Zealand Prime Minister Jacinda Ardern’s “Christchurch Call.”
Companies including Facebook and Twitter committed in May to take “transparent, specific measures” to prevent the amplification of violent content, after the killing of 51 people in Christchurch, New Zealand, was livestreamed on Facebook.
Releasing the data would provide an indication of the impact of the new policies.
The companies introduced a new system for data sharing around major incidents and agreed to implement “regular and transparent public reporting, in a way that is measurable and supported by clear methodology.”
The killing of two people by a gunman outside a synagogue in Halle, which was livestreamed on Amazon.com’s gaming platform Twitch, was the first test of the new “Content Incident Protocol.”
The Global Internet Forum to Counter Terrorism, a group founded by Facebook, Twitter, Microsoft and Google’s YouTube that facilitates the protocol, said on Thursday that content related to the Halle attack was “significantly less impactful online” than the Christchurch footage.
The group said companies shared hashes, or digital fingerprints, for 36 visually distinct videos linked to the attack, fewer than the 800 hashes Facebook said it shared with the group after the Christchurch shooting.
But the GIFCT and the companies declined to release data on how many people had seen the footage and how many of the videos were taken down automatically by their systems, key metrics in measuring the impact of content online.
After the Christchurch attack, Facebook said in a statement it had removed about 1.5 million videos of the attack globally, more than 1.2 million of which it blocked at upload.
Facebook and Microsoft declined to answer questions about how they decide when to disclose data around attacks.
Twitter spokesman Ian Plunkett said the company discloses data on “terrorist content removals” twice a year in a transparency report, adding: “We’ve nothing else to share.”
Google did not respond to requests for comment.
The Anti-Defamation League said last week the Halle footage appeared to have spread rapidly online after it was posted to white supremacist channels on messaging app Telegram.
“In order to assess the efficacy of the protocol, we urge the GIFCT and member companies to release data on the impact of the protocol on the spread of the video,” said Daniel Kelley, of the ADL’s Center for Technology and Society.
Twitch said after the Halle shooting that the footage had been viewed live by five people and then seen by 2,200 others before the company took it down.