For the first time in human history, we can measure many of these things with real precision. All this data exists, and companies are constantly evaluating the effects of their rules: each time they establish a rule, they test its effects and how it applies. The problem, of course, is that everything is locked away. No one has access to it except the people in Silicon Valley. So it’s super exciting but also super frustrating.
This ties in with perhaps the most interesting thing for me in your article, which is the concept of probabilistic thinking. Much of the coverage and discussion of content moderation centers on anecdotes, as humans are prone to. Like, “This piece of content, Facebook said it wasn’t allowed, but it’s been viewed 20,000 times.” One point you make in the article is that perfect content moderation is impossible at scale unless you simply ban everything, which no one wants. You have to accept that there will be an error rate. And each choice is about which direction you want the errors to go: do you want more false positives or more false negatives?
The problem is, if Facebook comes out and says, “Oh, I know that sounds bad, but we actually got rid of 90% of the bad stuff,” that doesn’t really satisfy anyone, and I think one of the reasons is that we are just stuck taking these companies’ word for it.
Totally. We have no idea at all. We are left at the mercy of that kind of statement in a blog post.
But there is a grain of truth to it. For example, Mark Zuckerberg has this line that he deploys all the time now, in every testimony and congressional hearing: the police don’t solve every crime, you can’t have a city without crime, you can’t expect some kind of perfect enforcement. And there is a grain of truth there. The idea that content moderation will be able to impose order on all the mess of human expression is a pipe dream, and there is something quite frustrating, unrealistic, and unproductive about the constant stories we read in the internet press: here is an example of an error, or a set of errors, of this rule not being fully enforced.
Because the only way to get the rules fully enforced would be to simply ban anything that looks even vaguely like the banned content. And then we’d have pictures of onions getting taken down because they looked like breasts, or whatever. Maybe some people aren’t that worried about free speech for onions, but there are other, worse examples.
No, as someone who watches a lot of cooking videos –
It would be a high cost to pay, wouldn’t it?
I look at a lot more pictures of onions than of breasts online, so this would hit me really hard.
Yeah, exactly, so the onion free speech caucus is strong.
We have to accept mistakes one way or another. The example I use in my article is from the pandemic. I think it’s very helpful, because it makes this really clear. At the start of the pandemic, platforms had to send their workers home like everyone else, which forced them to rely more heavily on machines. They didn’t have as many humans doing checks. And for the first time, they were very candid about the effects of that, namely: “Hey, we’re going to make more mistakes.” Normally they come out and say, “Our machines are so great, they’re magic, they’re going to clean it all up.” And then, for the first time, they said, “By the way, we’re going to make more mistakes in the context of the pandemic.” But the pandemic allowed them to say that, because everyone was saying, “Alright, make mistakes! We have to get rid of this stuff.” And so they erred on the side of more false positives in removing misinformation, because the social cost of not using the machines at all was far too high, and they couldn’t rely on humans.
In this context, we accepted the error rate. We read in the press how, for example, back in the early days when they banned mask ads, their machines over-applied that rule and also took down a bunch of volunteer mask makers, because the machines were like, “Bad masks; take them down.” And it’s like, OK, it’s not ideal, but at the same time, what choice do you want them to make there? At scale, where there are literally billions of decisions, there are costs all the time, and we were freaking out about those ads, so I think that’s a more reasonable trade-off to make.