At least 49 people were murdered Friday at two mosques in Christchurch, New Zealand, in an attack that followed a grim playbook for terrorism in the social media era. The shooter apparently seeded warnings on Twitter and 8chan before livestreaming the rampage on Facebook for 17 gut-wrenching minutes. Almost immediately, people copied and reposted versions of the video across the internet, including on Reddit, Twitter, and YouTube. News organizations, too, started airing some of the footage as they reported on the destruction that took place. By the time Silicon Valley executives woke up Friday morning, tech giants' algorithms and international content-moderating armies were already scrambling to contain the damage, and not very successfully. Many hours after the shooting began, various versions of the video were readily searchable on YouTube using basic keywords, like the shooter's name.

This isn't the first time we've seen this pattern play out: It's been nearly four years since two news reporters were shot and killed on camera in Virginia, with the killer's first-person video spreading on Facebook and Twitter. It's also been almost three years since footage of a mass shooting in Dallas went viral. The Christchurch massacre has people wondering why, after all this time, tech companies still haven't figured out a way to stop these videos from spreading. The answer may be a disappointingly simple one: It's a lot harder than it sounds.

For years now, both Facebook and Google have been developing and implementing automated tools that can detect and remove photos, videos, and text that violate their policies. Facebook uses PhotoDNA, a tool developed by Microsoft, to spot known child pornography images and video. Google has developed its own open source version of that tool. These companies have also invested in technology to spot extremist posts, banding together under a group called the Global Internet Forum to Counter Terrorism to share their repositories of known terrorist content. These programs generate digital signatures, known as hashes, for images and videos known to be problematic, to prevent them from being uploaded again. What's more, Facebook and others have machine learning technology that has been trained to spot new troubling content, such as a beheading or a video with an ISIS flag. All of that is in addition to AI tools that detect more prosaic issues, like copyright infringement.

It's unclear why the Christchurch video was able to play for 17 minutes, or even whether that constitutes a short time frame for Facebook. The company didn't initially respond to WIRED's queries about this or to questions about how Facebook distinguishes between newsworthy content and gratuitous graphic violence. After this story was published, Facebook sent further explanation about how it's handling videos of this shooting.

"Since the attack happened, teams from across Facebook have been working around the clock to respond to reports and block content, proactively identify content which violates our standards and to support first responders and law enforcement," a spokesperson said. "We are adding each video we find to an internal database which enables us to detect and automatically remove copies of the videos when uploaded again. We urge people to report all instances to us so our systems can block the video from being shared again."

This means that the original video has been hashed, so that other, similar videos can't be shared again. In order to catch videos that have been altered to evade detection (for instance, videos of the footage playing on a second screen), Facebook is deploying the same AI it uses to spot blood and gore, as well as audio detection technology. Facebook says that when it finds this content coming from links to other platforms, it's sharing the information with those companies.

"Our hearts go out to the victims, their families, and the community affected by this horrendous act," Facebook's spokesperson said in an earlier statement. "New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter's Facebook and Instagram accounts and the video. We're also removing any praise or support for the crime and the shooter or shooters as soon as we're aware. We will continue working directly with New Zealand Police as their response and investigation continues."

A Google representative added, however, that videos of the shooting that have news value will remain up. This puts the company in the tricky position of having to decide which videos are, in fact, newsworthy. It would be a lot easier for tech companies to take a blunt-force approach and ban every clip of the shooting from being posted, perhaps using the fingerprinting technology used to remove child pornography. Some might argue that's an approach worth considering.
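The hash-based fingerprinting described above can be sketched roughly as follows. This is a toy "average hash" with a Hamming-distance blocklist lookup, a minimal illustration of the general technique; it is not PhotoDNA's actual algorithm, and every function name and threshold here is an assumption for the example:

```python
# Toy sketch of perceptual fingerprinting: hash known-bad media once,
# then block near-duplicate re-uploads by comparing hashes.
# Illustrative only; real systems (e.g. PhotoDNA) use far more robust hashes.

def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale grid.
    `pixels` is a flat list of 64 intensity values (0-255)."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_blocked(candidate_hash, blocklist, max_distance=5):
    """Match if the candidate is within a small Hamming distance
    of any known-bad hash (catches lightly altered copies)."""
    return any(hamming(candidate_hash, h) <= max_distance for h in blocklist)

# Example: a slightly brightened re-encode still matches the original.
original = [10 * i % 256 for i in range(64)]
altered = [min(255, p + 3) for p in original]
blocklist = {average_hash(original)}
print(is_blocked(average_hash(altered), blocklist))  # True for this toy data
```

Uniformly brightening every pixel shifts the average by the same amount, so the bit pattern (and the hash) is unchanged; that robustness to small perturbations is what distinguishes perceptual hashes from cryptographic ones, and it is also why re-filmed or heavily transformed copies, like footage playing on a second screen, still slip past and need separate AI detection.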