The Chinese Room

AI Law

This is not a post about a Chinese version of ‘Panic Room’. This is a post about artificial intelligence and the different types of AI out there.

The Turing Test

Before talking about the Chinese Room, we need to talk about the so-called Turing test, which Alan Turing devised in 1950.

In this test, there are two people and an AI. The three are separated from each other and only allowed to communicate on paper. One of the people is an evaluator. This person has the task of figuring out which of the other two is the AI and which is the person, by asking both of them questions.

If the evaluator is unable to distinguish between the two, the AI has passed the Turing test.

Nowadays, appliances like Google Home are coming close to passing this test, for example when calling a hairdresser. I say close, because there are reasons this is not really the case yet. The conversation is a narrow one, and the person on the other end of the line is not actively trying to unmask the computer.

So, what does this have to do with a Chinese room? Heck, what is this ‘Chinese room’?

The Chinese Room

What if Google manages to create a generation of AI that can indeed pass the Turing test fully? Is that then artificial intelligence? Will it herald Judgement Day?

In 1980, John Searle devised a thought experiment called ‘the Chinese Room argument’.

Imagine a computer program that takes Chinese characters as input and outputs Chinese characters. The computer program manages to pass the Turing test. Does that mean the computer program understands Chinese, or is it just very good at simulating that it understands Chinese?

Now imagine we replace the computer program in this thought experiment with an English-speaking human, who takes Chinese characters as input, applies a set of rules from a complicated rule book, then outputs other Chinese characters according to those rules. This person does not understand Chinese; they merely follow the rules. Does this mean the computer program doesn’t understand Chinese either? Or does the understanding reside in the rule book itself?
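As a toy sketch, the rule book can be thought of as a lookup table that maps input symbols to output symbols. The rules below are made up for illustration; the point is that the operator applying them never needs to know what any symbol means.

```python
# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The "rule book" is a hypothetical lookup table; the operator applies
# it mechanically, without understanding input or output.
rule_book = {
    "你好": "你好，你怎么样？",  # made-up rules: the operator never
    "再见": "再见！",            # learns what these symbols mean
}

def room_operator(message: str) -> str:
    """Apply the rule book to an incoming message, with no understanding."""
    # Fallback rule: "please say that again"
    return rule_book.get(message, "请再说一遍。")

print(room_operator("你好"))
```

A real program passing the Turing test would need vastly more than a lookup table, of course, but the structure is the same: rules in, symbols out, no comprehension required.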

Strong versus weak AI

Searle called ‘strong AI’ the position that the computer program really does understand Chinese, and ‘weak AI’ the position that the computer program only simulates understanding.

If the computer program passes the Turing test, then the distinction might be moot. I mean, whether the hairdresser talks to a conscious or non-conscious AI is not really relevant, is it? She doesn’t know she’s talking to one.

However, it should be noted that current AIs do not pass the Turing test by a long shot. The Google Home is easily unmasked as a computer. A self-driving Tesla might be able to pass a Turing test that only relates to driving, but it cannot strike up a conversation about cooking. Nor do we need these appliances to.

What is important, though, is to keep in mind that whether or not the AI is conscious might not matter; the results of its actions do. If a Tesla accidentally kills somebody, they’re dead regardless of consciousness. It does matter for legal purposes, though, as I’ve written before.

The legal Chinese room

Let’s expand on the thought experiment. Say the English-speaking person in the room running Chinese characters through a program is helping a real Chinese person defuse a bomb. Unfortunately, the program gives a response, ‘cut the blue wire’, that gets the Chinese person killed. It still passes the Turing test; its advice was just terrible.

The program has killed a person. However, does that make the English speaker running the program responsible? I’d argue not. The person who wrote the program is ultimately responsible.

This thought experiment has applications in the real world, of course. If a self-driving Tesla kills somebody, whose fault is it? The Tesla car’s or the programmer’s? I’d argue the latter. If a military drone kills ten civilians, whose fault is that, the drone’s fault, or the military’s? You get my drift.

This becomes all the more relevant now that the actual Chinese government is implementing AI to control its entire population. That AI is not conscious either. It doesn’t know common decency, or right from wrong. It runs scripts that try to apply ‘communist values’ as best as its programmers know how to implement them.

Scared yet?


Now you know what the Chinese Room is. Keep it in mind for the lawsuits revolving around Tesla cars and Amazon Alexas that are bound to pop up in the near future.

Written by Martin Stellinga

I'm a science fiction and fantasy author/blogger from the Netherlands.