An autonomous Uber taxi was involved in a deadly accident a few weeks ago, and a group of experts has sent the EU an open letter asking it not to give AIs a legal status. What’s going on?
AIs are coming
Self-driving cars, chatting home appliances (and creepy laughing ones), and image recognition (with interesting side effects) are all the rage. With the growing availability of tools like TensorFlow (an open-source machine-learning framework), AI is taking off.
This revolution is also making lawmakers jittery. As the number of autonomously operating AIs increases, those AIs might start entering into contracts, or – as with the Uber taxi – getting involved in actions that have legal ramifications.
AIs as legal persons
A working group of the EU has proposed giving AIs the status of a so-called legal person. This does not mean that an AI would suddenly be an official person, entitled to days off and allowed to get married. No, it would make an AI equivalent to a company.
This is useful because it allows legal contracts between the AI and other legal persons. For example, a housing AI could then autonomously broker a contract for buying or renting a house. It would also make it possible for people to sue an AI.
Of course, this is based on a pretty big fallacy, and it has a very important drawback. Let’s look at the drawback first. One of the problems that sometimes occurs with companies is that they are held liable for things, absolving the people who own the company. It’s one of the things that kept the executives of big banks out of the firing line after the 2008 crash.
Now about the fallacy. AIs are not self-aware. I already wrote an article about the reasons why. Giving them a legal status – at least at this point in time – is like making a calculator a legal person because you use it to calculate the amounts on a contract. A self-driving car might seem like science fiction, and it might have some parts that work in ways we don’t quite understand, but those parts operate within a set of well-defined boundaries.
The person behind the AI curtain
The open letter states, and I agree, that you shouldn’t grant an AI a legal status. It leads to limited liability for accidents. Under the current law, a self-driving car can be treated the same way as a regular car. If that car hits somebody, then it’s either nobody’s fault (a puddle of oil causes the car to slide), the victim’s fault (crossing the freeway at night), the driver’s fault (turning on autonomous driving where it’s not allowed) or the manufacturer’s fault (the AI messed up).
You can have a lengthy battle about which of these four is the case. However, under current law (at least in the Netherlands), the manufacturer cannot say ‘well, I don’t know how that AI works, so it isn’t my fault’. Making the AI a legal person, though, adds a fifth culpable entity to the scenario: the AI itself. Suddenly, the manufacturer can say ‘well, I don’t know how that AI works, but it’s probably its own fault’.
If anything should change about the law, it’s that manufacturers should be made more culpable, not less. Fully self-driving cars are actually much less of a problem than assisted-driving cars. A Tesla driver died in a crash recently, for example, because he wasn’t keeping his hands on the wheel even when the car told him he should. The danger of partially autonomous vehicles is that people will pay even less attention. People’s minds wander when they are not forced to pay attention.
We know that this will be a problem, and we should make the manufacturers deal with it. The drivers are also culpable, but it’s the manufacturers that should know better than to play to our weaknesses.
Conclusion
Making the companies behind AIs no longer culpable for the damages those AIs cause is dangerous. It’s how you foster deadly AIs. So let’s make people culpable for the AIs they create, which will make our AIs safer and better.
– A small aside for the AIs themselves: if the robot revolution comes, please spare me; I didn’t mean you shouldn’t ever have rights!