Buffalo Shooting Tests Internet Antiterrorism Accord

WELLINGTON, New Zealand—The live-streaming of a shooting rampage at a supermarket in Buffalo, N.Y., demonstrated the strengths and limitations of a global accord to counter the spread of terrorist content online.

The Christchurch Call’s initial goal was to forge a coordinated response by social-media companies when violent, extremist videos are broadcast on their platforms. A Meta spokesman said the company has been taking its own measures, including investing in detection technology and working with the Christchurch Call and other groups.

Still, the weekend attack in Buffalo—broadcast on Twitch, a site owned by Amazon.com Inc. that specializes in live streaming—shows that tech companies remain far from being able to fireproof their platforms. Twitch said it removed the stream less than two minutes after the shooting began, faster than platforms have responded in the past. But versions of the video could still be found on Facebook, Twitter and YouTube more than a day later.

“The Buffalo shootings will undoubtedly provide further impetus” to what the Christchurch Call is trying to achieve, said Paul Ash, coordinator of the accord for New Zealand’s government. “As a community, we will analyze this tragic event and use what we have learned to further strengthen our crisis-response measures.”

One Christchurch Call success was getting tech companies to use the Global Internet Forum to Counter Terrorism as a central point for sharing digital fingerprints—known as hashes—of attack videos and images, said Prof. Dave Parry, head of information technology at Australia’s Murdoch University.

The forum, founded by tech companies in 2017 for technical collaboration, said that its members added about 870 distinct pieces of content—130 versions of the live-stream video and 740 images—to the hash-sharing database within a day of the Buffalo shooting. The hashes allow algorithms to identify the problematic content as it is uploaded.
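As a rough illustration of how such matching works, the sketch below checks an uploaded image against a set of shared fingerprints using a toy “difference hash,” which, unlike a cryptographic hash, changes little when an image is resized or re-encoded. It is a minimal Python example assuming the Pillow imaging library; the function names are illustrative, and GIFCT members rely on far more robust perceptual algorithms, such as Meta’s open-sourced PDQ, in production.

# Illustrative only: a toy perceptual-hash check against a shared database.
# Real platforms use hardened algorithms (e.g., PDQ), not this sketch.
from PIL import Image

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a simple difference hash. Unlike a cryptographic hash,
    it is resilient to resizing and re-encoding."""
    img = Image.open(image_path).convert("L").resize(
        (hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def is_known_content(image_path: str, shared_hashes: set[int],
                     max_distance: int = 4) -> bool:
    """Flag an upload whose hash falls within a small Hamming distance
    of any fingerprint shared through the database."""
    h = dhash(image_path)
    return any(bin(h ^ known).count("1") <= max_distance
               for known in shared_hashes)

The small Hamming-distance tolerance is what lets such a check catch slightly altered copies rather than only byte-identical files.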

Policing the internet is a herculean task, given the amount of video uploaded and users’ ability to manipulate files to make them harder for algorithms to detect. Advances in artificial intelligence may not be enough to meet the challenge.

“There is an awful lot of combat-type footage on the web including videogame sequences, and often the attack videos are low-quality anyway, which makes an intelligent approach of ‘understand then block’ much harder,” Mr. Parry said.

Tech companies are called on to walk a fine line with online content—restricting violent extremism without undermining free expression. Meta says it has banned more than 250 white-supremacist organizations globally, and removed nearly 900 militarized social movements from its platform.

Governments also identify items they deem to contain terrorist and violent extremist content. In Australia, a Christchurch Call member country, government officials said they referred nearly 6,000 items to platforms between the start of 2020 and mid-March of this year, of which more than 4,200 were removed.

“Crisis response is but one part,” said Mr. Ash, describing the Christchurch Call as a work in progress.

Areas that need further effort include understanding the role of algorithms in exposing people to violent extremism and designing interventions, he said.

“Clearly the task of preventing this content from being produced and promulgated remains a key priority,” Mr. Ash said. The Christchurch Call leaders will be taking stock of progress later this year.

The launching ceremony of the Christchurch Call in Paris three years ago. Photo: Charles Platiau/Pool/Shutterstock

Aliya Danzeisen, national coordinator of the Islamic Women’s Council of New Zealand, part of the Call’s advisory panel, said the Call had been expected to look beyond its immediate focus on stopping the spread of the Christchurch video to tackle the reasons why hatred based on factors such as religion and race flourishes online.

“It’s just kind of stopped,” Ms. Danzeisen said.

Governments, she said, need to end the anonymity and impunity that social media affords extremists, and to hold internet companies legally accountable for the content they allow.

But others say social-media companies already go too far in moderating content and policing speech.

More than 60 civil-society organizations including Amnesty International, Reporters Without Borders and the Electronic Frontier Foundation opposed legislation adopted by the European Union parliament in April 2021 that requires online platforms to remove terrorist content within an hour of its being flagged. Pointing to its transnational basis and the lack of judicial oversight, they said the law was open to abuse by governments and could encourage excessive censorship and threaten freedom of expression.

Any global efforts to go beyond the current approach of reactively taking down extremist content face challenges that might be insurmountable, said Murdoch University’s Mr. Parry. Social media is international but regulation is local and subject to domestic interests. There is no agreement among countries on a common set of tools or rules for regulating content.

Making algorithms more random to reduce the echo-chamber effect that pushes content to users based on their viewing preferences also could face resistance from advertisers and social-media companies.
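To picture what making ranking algorithms “more random” could mean in practice, the hypothetical sketch below uses an epsilon-greedy rule: most of the time it serves whatever a user’s history scores highest, but with a small probability it serves a random item instead, diluting the feedback loop. The scoring model and item pool are placeholders, and this does not describe any platform’s actual system.

# Hypothetical sketch of injecting randomness into a recommender.
import random

def recommend(user_scores: dict[str, float], all_items: list[str],
              epsilon: float = 0.1) -> str:
    """Return an item ID: usually the top-scored one, occasionally random."""
    if not user_scores or random.random() < epsilon:
        # Exploration step: ignore the preference model entirely.
        return random.choice(all_items)
    # Exploitation step: serve the item the user's history scores highest.
    return max(user_scores, key=user_scores.get)

Raising epsilon weakens the echo-chamber effect, but it also loosens the tight preference targeting that advertisers pay for, which is the tension described above.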

Write to Stephen Wright at stephen.wright@wsj.com

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.
