You Can't Count On Bots To Do A Person's Job
Algorithms are very much not brains.
YouTube, in trying to police content, polices content that should not be policed -- content that the world is worse for not having out there.
Matthew Ingram writes at CJR that YouTube takedowns are making it hard to document war crimes:
Like every other large social platform, YouTube has come under fire for not doing enough to remove videos that contain hate speech and disinformation. The Google-owned company has said repeatedly that it is trying to get better at doing so. But in some cases, removing videos because they contain graphic imagery of violence can be a bad thing.

That's the case that Syrian human-rights activist and video archivist Hadi Al Khatib makes in a video that the New York Times published on Wednesday in its Opinion section. Khatib co-produced the clip with Dia Kayyali, who works for Witness, an organization that helps people use digital tools to document human rights violations. In the video, Khatib notes that videos of bombings the Syrian government has carried out on its own people -- including attacks with barrel bombs, which Human Rights Watch and other groups consider to be a war crime -- are important evidence, but that YouTube has removed more than 200,000 such videos. "I'm pleading for YouTube and other companies to stop this censorship," Khatib says in the piece. "All these takedowns amount to erasing history."

There are similar policies at Facebook and Twitter, both of which have also removed videos that included evidence of government attacks in Syria and elsewhere because they were flagged as being violent or propaganda. The problem, Kayyali says, is that most of the large social platforms use artificial intelligence to detect and remove content, but an automated filter can't tell the difference between ISIS propaganda and a video documenting government atrocities. Many of the platforms have been placing even more emphasis on using automated filters because they are under increasing pressure from governments in the US and elsewhere to act more quickly when removing content. Facebook CEO Mark Zuckerberg has bragged that automated systems take down more than 90 percent of the terrorism-related content posted to the service before it is ever flagged by a human being.
This reminds me of how McKay Smith, who writes moving tweet threads documenting the Holocaust and memorializing the dead, periodically has his account threatened.
How can we remember the atrocities (and maybe avoid repeating them) if they are constantly erased from the social media platforms that are increasingly where Americans and people around the globe get their information?
While these companies can do what they want, there's an argument to be made that they have a social and moral responsibility to do what it takes, as enormous a job as that must be, to see that AI doesn't broad-brush non-terrorist (etc.) content off their sites.
This may require spending some big bucks on human employees.
What makes you think humans would do a better job? After all, the bots are programmed by humans.
The problem is not so much that we'll forget the atrocities as that we'll trivialize them. Comparing political enemies to Nazis, for example, trivializes the evil that Nazis did. Same thing with comparing climate change denial to Holocaust denial.
Conan the Grammarian at October 26, 2019 5:59 AM
"How can we remember the atrocities ..."
You are acting like erasing the atrocities and whitewashing history isn't part of the objective.
There is a very simple solution here, one already in law, though judges don't like using it. If Google wants to police the content on its system, it is a publisher, with all the rights and responsibilities involved -- and it is responsible for any legally improper content. Or it polices nothing and is a platform only. It shouldn't get to have it both ways.
Ben at October 26, 2019 7:16 AM
Ben is right. Currently they get to have it both ways.
Can you tell what is gratuitous violence, what is propaganda, and what is "the truth"? No, not always. We have to trust people by giving them the chance to check facts, to have access to information. The Left does not trust people to come to the "right" conclusions. Leftist regimes have always employed censorship (North Korea, Russia, Cuba).
Note that there is also censorship by simply failing to report: witness how little coverage the Hong Kong protests have gotten even though it is a very big deal.
cc at October 26, 2019 7:33 AM
I'm surprised that the US Government doesn't fund a site (off-book) that willingly hosts ISIS videos. Just to data mine the posters and their viewing audience. And try to upload malware on their computers.
Ben gets right to the point, and in a way that doesn't attack property rights: you're either a publisher with editorial control, or you're not and you have a safe harbor protection.
Right now, they have it both ways. YouTube can drop a PragerU video and claim it was the "algorithm" instead of having to own up and say "we find Prager odious and do not want his content."
I R A Darth Aggie at October 26, 2019 7:39 AM
This is something to consider.
Crid at October 26, 2019 10:55 AM
Facebook and Twitter, etc., would be free to do what they want if they were still private companies. But they aren't. They have joined with the State against us. They have become as much a part of the State as the CIA or other government mafia gangs.
Kent McManigal at October 26, 2019 2:28 PM
How long will it take for this to be banned?
"Red Flag" "laws" are already confiscating the legally-possessed property of Americans based on rumor and innuendo.
It's a VERY small leap to the idea that the public should only know what comforts them.
Radwaste at October 26, 2019 8:17 PM
I sympathize with Kent's resentment, but every bank in America has been similarly compromised by KYC requirements.
Crid at October 27, 2019 8:31 AM