When a soldier in Thailand killed 29 people and injured more than 50 others last weekend, his bloody rampage was reportedly broadcast live to Facebook for almost five hours before it was taken down.
The attack happened almost a year after the Christchurch shooter livestreamed 17 minutes of his assault on two mosques that left 51 people dead and 50 injured.
The latest incident has revived questions about who should be responsible for removing harmful content from the internet: the networks that host the content, the companies that protect those networks, or governments of the countries where the content is viewed.
Australia’s communications minister, Paul Fletcher, wrote in an opinion piece this week that it was “frankly pretty surprising that a government needs to request that measures be in place to protect against the livestreaming of murder”.
Australia is preparing to introduce an online safety act, which will create rules around terrorist-related material, as well as cyber-abuse, image-based abuse and other kinds of harmful content.
But while the question of whether to take down a livestream of murder is an obvious one, decisions about other kinds of take-down requests can be fraught.
“Some of those requests are kind of scary,” says John Graham-Cumming, the chief technology officer of US web security company Cloudflare. “In Spain you have Catalonia trying to be independent, and the Spanish government saying ‘that is sedition, can you remove it?’”
After the Christchurch shooting, Australia quickly passed laws under which company executives can be jailed for up to three years, and companies fined up to 10% of global revenue, for failing to quickly remove such material when alerted by the eSafety commissioner.
In the UK, the government will appoint Ofcom to issue fines to social media companies that fail to remove harmful content.
The online safety act the Australian government is consulting on will give the eSafety commissioner the power to:
• direct internet providers to block domains containing terrorism material “in an online crisis event”
• ask search engine providers to de-rank websites that provide access to harmful material
• force sites to remove cyber abuse or image-based abuse of adults within 24 hours
It will also allow the minister to set, via legislative instrument, a set of online safety expectations that social media companies will need to comply with.
While this will make things clearer for tech companies, it doesn’t spell the end of their headaches.
As Graham-Cumming points out, once one government has such a law in place, other governments can make similar demands.
“If the law in Australia says we have to hand over all our [encryption] keys then, for example, China or Saudi Arabia or Russia or Brazil or India or Germany could say ‘well you did it for Australia, how are we different from Australia?’” he said.
“There is this tension between this sense of global internet, and then local policing.”
Graham-Cumming says the world is still getting to grips with what role tech companies should play in determining what should be allowed online.
“We are in the middle of this massive change in the world where everything has gone online – good and bad – and as a society and as governments [we] don’t yet know what the answer is,” he says.
“I think what has happened is some quarter of the public is saying to technology companies: ‘you decide for me’. And that’s an unusual situation where private companies are being asked to make public policy like that.”