Facebook Blames Their Users For Facebook’s Failure to Stop the New Zealand Mosque Shooter’s Facebook Livestream Video

Wow! Facebook is actually blaming the users of Facebook for not stopping the 17-minute Facebook Livestream video filmed by the New Zealand mosque shooter as he shot and killed 50 people at two mosques, and then Facebook allowed that same mosque shooter video to be shared millions of times on their platform. You really can’t make this shit up.


Facebook has previously said that in the first 24 hours after the massacre, it removed 1.5 million videos of the attacks, “of which over 1.2 million were blocked at upload,” implying 300,000 copies successfully made it onto the site before being taken down.

Facebook claims that it couldn’t stop the New Zealand mosque shooter’s Facebook Livestream video because Facebook users didn’t report it early enough.

“No users reported the [New Zealand mosque shooter] video during the live broadcast” – Facebook’s excuse for allowing the New Zealand mosque shooter’s Livestream video to air, and then to be shared with millions more people afterwards

What kind of bullshit excuse is that, Facebook?

So it’s up to Facebook users to police the content that Facebook allows to be transmitted over their platform!?
I don’t think that’s how it works! Facebook is responsible for policing Facebook’s traffic, and blaming the users of Facebook is a cop-out and a disgrace.

Facebook needs to take responsibility for all of the disgusting shit that they allow to be spread on their platform, instead of relying on Facebook users to police other Facebook users and feeding us a bunch of bullshit lies and excuses.

Why did Facebook air live video of the New Zealand mosque shooting for 17 minutes? Didn’t anyone alert the company while it was happening?

Facebook says no. According to its deputy general counsel, Chris Sonderby, none of the 200 or so people who watched the live video flagged it to moderators. In a Tuesday blog post, Sonderby said the first user report didn’t come until 12 minutes after the broadcast ended.

All of which raises additional questions — among them, why so many people watched without saying anything, whether Facebook relies too much on outsiders and machines to report trouble, and whether users and law enforcement officials even know how to reach Facebook with concerns about what they’re seeing on the service.

“When we see things through our phones, we imagine that they are like a television show,” said Siva Vaidhyanathan, a professor of media studies at the University of Virginia. “They are at a distance, and we have no power.”

Facebook said it removed the video “within minutes” of being notified by New Zealand police. But since then, Facebook and other social media companies have had to contend with copies posted by others.

The shooting suspect carefully modeled his attack for an internet age, as he live-streamed the killing of 50 people at two mosques in Christchurch, New Zealand.

Tim Cigelske, who teaches about social media at Marquette University in Milwaukee, said that while viewers have the same moral obligations to help as a bystander does in the physical world, people don’t necessarily know what to do.

“It’s like calling 911 in an emergency,” he said. “We had to train people and make it easy for them. You have to train people in a new way if you see an emergency happening not in person but online.”

To report live video, a user must know to click on a small set of three gray dots on the right side of the post. A user who clicks on “report live video” gets a choice of objectionable content types to select from, including violence, bullying and harassment. Users are also told to contact law enforcement if someone is in immediate danger.

Facebook also doesn’t appear to post any public information instructing law enforcement how to report dangerous or criminal video. The company does have a page titled “information for law enforcement authorities,” but it merely outlines procedures for making legal requests for user account records. Facebook didn’t immediately respond to a request for comment and questions about its communications with police.

Facebook uses artificial intelligence to detect objectionable material, while relying on the public to flag content that violates its standards. Those reports are then sent to human reviewers, the company said in a November video.

The video also outlined how Facebook uses “computer vision” to detect 97 percent of graphic violence before anyone reports it. However, it’s less clear how these systems apply to Facebook’s live streaming.
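The moderation process described above (automated detection running alongside user reports, with flagged items routed to human reviewers) amounts to a triage loop, roughly sketched below. This is purely a hypothetical illustration of that flow, not Facebook’s actual system; the function names, the report queue, and the score threshold are all assumptions made for the example.

```python
# Hypothetical sketch of a reports-plus-classifier triage loop.
# NOT Facebook's real pipeline; names and threshold are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Post:
    post_id: str
    user_reports: int = 0        # flags submitted by viewers
    violence_score: float = 0.0  # score from an automated classifier


@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def enqueue(self, post: Post, reason: str) -> None:
        # Human reviewers work through this queue.
        self.items.append((post.post_id, reason))


def triage(post: Post, queue: ReviewQueue, score_threshold: float = 0.97) -> None:
    """Send a post to human review if either signal fires."""
    if post.violence_score >= score_threshold:
        queue.enqueue(post, "flagged by classifier")
    elif post.user_reports > 0:
        queue.enqueue(post, "flagged by user report")
    # Otherwise nothing reaches a reviewer until a new signal arrives.


if __name__ == "__main__":
    queue = ReviewQueue()
    triage(Post("live-123", violence_score=0.42), queue)   # neither signal fires
    triage(Post("live-456", user_reports=3), queue)        # user reports fire
    print(queue.items)
```

The point of the sketch is the failure mode the article describes: if the classifier misses a live stream and no viewer files a report, nothing ever reaches a human reviewer.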

Experts say live video poses unique challenges, and complaints about live streaming suicides, murders and beatings regularly come up. Nonetheless, they say Facebook cannot deflect responsibility.

“If they cannot handle the responsibility, then it’s their fault for continuing to provide that service,” said Mary Anne Franks, a law professor at the University of Miami.

She calls it “incredibly offensive and inappropriate” to pin responsibility on users subjected to traumatic video.

Stand Up To Government Corruption and Hypocrisy – usbacklash.org