AI moderation of social media is doomed

Social media have a moderation problem. They’ve had one since their inception, but we’re only now starting to see its full scope. So, some thoughts on the problem.

Disinformation

Two weeks ago, when I posted a rant about my experiences during the pandemic, Facebook reminded me to seek Covid information from proper sources. Fair enough. But I’d hope there is a difference between what I posted and somebody exclaiming that vaccines are a lie by a cabal of blood-drinking pedophiles out to inject us with mind-controlling chips — hint: that last one is utter BS.

So, what is going on here? Well, Facebook has a bad reputation when it comes to moderating its platform. The platform has influenced elections and the Brexit referendum, and was possibly instrumental in genocide. But let’s not single out Facebook here: YouTube, Twitter, and other social media are not much better off.

Looking at just the pandemic, it was always going to be fuel for conspiracy theories. It got worse when President Trump decided to politicize it. I personally think he assumed Covid would go away by the summer of 2020, so that politicizing it would let him claim a big win later. When that didn’t happen, instead of admitting the error, he doubled down. Other populist groups and authoritarian regimes saw the potential, and here we are.

However, the pandemic has also brought to light that we have a huge moderation problem.

Moderation, freedom, and bubbles

Our social media are a product of our free society. Of course, freedom is a good thing. However, like I’ve argued before: equality is really more important than freedom. And that is where the problems start. Just to be clear: censorship is bad. Everybody should be allowed to say what they want, and if people want to believe idiotic things, that should be their right. Up to a point.

When lies push half the population to vote for a would-be tyrant, that causes problems for everybody. When they refuse vaccines, causing the elderly and the immunocompromised around them to die, we are back in equality territory.

Social media aggravate this situation through their business model. They sell ads. Ads need to be seen, so people need to engage with the platform as much as possible. Addiction is the business model. And you know what really engages and addicts? Outrageous news. And because there is no censorship, that outrageous news can come from anybody: from real journalists, or from a random guy on the other side of the world making it up.

Social media also lead to echo chambers and so-called media bubbles. Facebook prioritizes content you engage with, which is often content from like-minded users. You believe conspiracy theories? Good for you, here are a thousand YouTube videos about that. Those echo chambers are outrageous enough to drive engagement, so they are automatically amplified and strengthened.

Populists thrive on creating their own reality with ‘outsiders’ to fight, and ‘insiders’ that only the messianic populist leader can save from the outsiders. This distortion of reality works really well in media bubbles, where you can drown followers in your nonsense and weaponize them.

Finally, there are bad actors actively trying to bring down the system with disinformation campaigns. And by bad actors I mean actual state-run armies of trolls.

Some people see social media as the great equalizer, but make no mistake, social media are a war zone, and unscrupulous governments are putting armies on the field.

Moderation comes

Even Google, Facebook, and Twitter realize that they have a problem. Not because they’re losing money, but because authorities have gotten wise and are trying to put a stop to the problem. And that will lose them money, as demonstrated by the plunge Facebook’s stock recently took.

Unfortunately, untying this knot is a problem. A lot of people have a vested interest in keeping the disinformation machine going. People like Trump in the US, and the entire Republican party behind him. Other right-wing populists. But also grifters who make a good buck off fake news. And don’t underestimate the state actors, who can easily amplify calls for ‘freedom’ and to ‘stop censorship’.

Then there are the companies themselves. They have a vested interest in not moderating their content. Remember, outrageous content engages, and engagement makes money. So social media companies really don’t want to do moderation if they can help it, and like to frame the problem as something that the users themselves are responsible for, or maybe the government.

Interestingly enough, a large force supporting moderation is the content industry. The music, movie, and gaming industries are not happy that their work is pirated on YouTube, shown on Twitch, or illegally shared through Facebook.

So the pressure to moderate has been mounting, and several systems to do just that have started to appear: from the dreaded ContentID on YouTube to features on Facebook and Twitch trying to do something about hate raids.

Unfortunately, it’s not working…

Moderation at scale

Big Tech is built on the idea of scale, and of more tech as the solution for everything. You create a business model that is as automated as possible, then scale the heck out of it. If you had two users on Facebook, you’d need a lot of development, maintenance, and hardware relative to the number of ads you can sell. The work for a thousand users isn’t much more than for two. With nearly three billion users, Facebook still only has around 45,000 employees.

But that scale means that it is impossible to do even the simplest manual action for each user individually. Imagine doing one mouse click for each user. Even if all employees at Facebook chipped in, they’d still be doing over 60,000 clicks per employee. So, anything they do moderation-wise has to be largely automated. And that’s where the real fun starts.
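To make that back-of-the-envelope arithmetic concrete, here is a minimal Python sketch. The user and employee counts are the rough figures quoted above, not exact data, and the clicking speed is purely an illustrative assumption.

```python
# Rough scale of "one manual action per user" at Facebook.
# Figures are approximations from the text above, not exact data.

users = 2_900_000_000        # roughly 2.9 billion users
employees = 45_000           # roughly 45,000 employees

clicks_per_employee = users / employees
print(f"{clicks_per_employee:,.0f} clicks per employee")    # ~64,444

# Assume one click every two seconds, eight hours a day (illustrative only).
clicks_per_workday = 8 * 60 * 60 / 2                        # 14,400 clicks per day
print(f"{clicks_per_employee / clicks_per_workday:.1f} working days each")  # ~4.5
```

That is almost a week of full-time clicking for every single employee, for a single click per user. Anything heavier than a click, like actually reading a post, is hopeless without automation.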

You see, you can’t automate moderation.

Turing test and moderation

Let me share a theory with you:

An AI can only do proper content moderation if it passes the Turing test.

A small refresher on the Turing test: an interrogator questions an AI and a human, and if the interrogator cannot reliably tell which is which, the AI passes the test.

For years, Facebook and Amazon have been tooting their own horns about how, with a few likes or clicks, they know users’ likes and dislikes better than their spouses do. But the same should hold for moderation bots. If I look at a bunch of images and at how they were moderated, I should get to know the moderator better than their ‘spouse’ would. Or rather, I can probably tell whether the moderator is an AI or not.

Recognizing fake news and hateful memes is hard. You need the proper context and a lot of knowledge about the subject. It’s very easy to fall for reasonable-sounding doubt about vaccines, only to learn that the person sowing that doubt is in fact making money from an alternative vaccine and faking research data (and yes, that’s about you, Wakefield).

Development of self-driving cars has really highlighted that AIs are good at pattern recognition, but suck at context and rational thought. They are easily confused by stickers put on road signs, and might suddenly conclude they are no longer driving in the right lane, or need to brake or speed up for the wrong reasons. And I believe content moderation is an order of magnitude harder than driving a car.

Where does that leave us?

Facebook, Twitter, and YouTube are all touting automated moderation solutions. And we can all see that they don’t work. They either fail miserably, or they are easily abused. And I think those companies all know it. My assertion that proper moderation requires passing the Turing test isn’t a complicated revelation. But actual moderation isn’t the point, I think.

We’ve already established that social media companies actually benefit from bad moderation. They are under pressure to do something, and they have done something. That it doesn’t work, and will never work, doesn’t really matter, because they only need to prevent legislation that would force the proper outcome. Which is why they don’t mind regulations that force them to take automated measures. YouTube is fine with ContentID: it keeps the competition out because it’s hard to develop, and YouTube gets to make a ton of money regardless of how well it works.

GDPR is a good example of how it should be done: regulations that make the desirable outcome the legal mandate. It’s not perfect, but GDPR forces companies to get proper consent before user privacy can be violated. And regulations like that made Facebook threaten to withdraw from Europe, so it must be working.

So that is what we need: legislation and regulations that make social media responsible for the content on their platforms. If you can hold Facebook liable for the hate speech on its platform, or sue it over Covid disinformation that amounts to illegal medical advice, or over interference in elections, then you’re going to get somewhere.

Yes, this is dangerous as well. It opens up ways for authoritarian regimes to censor these platforms. But, you know, they’ll do that anyway. China has had Facebook and Twitter blocked for a decade already.

Of course, such rules will kill the current generation of social media. Facebook, Twitter, and YouTube cannot survive them, because they cannot scale their moderation. But that might pave the way for a new generation of social media that solve this problem differently. Or not, but a world without social media might not be so bad either.

Conclusion

Social media have a huge moderation problem, and their scalable AI moderation solutions will never work.

We need legislation and regulations to force them to fix this, even if that destroys them. Because it’s a war out there, and after a genocide in Myanmar, millions dead from Covid, and the US nearly sliding into fascism, maybe we need to start fighting back.

Written by: Martin Stellinga

I'm a science fiction and fantasy writer from the Netherlands.