AI and the singularity


What do Stephen Hawking, Elon Musk, and Bill Gates have in common? They’re all afraid of the dangers of AI. Talking about the dangers of AI leads quickly to the technological singularity. So let’s start with what the singularity actually is.

The technological singularity

Artificial Intelligence is developing rapidly. We’ve gone from room-sized computers that could barely play Tic-tac-toe to phones that recognize voice commands. Self-driving cars are around the corner, and an AI-powered Barbie is on her way to bring AI to your children. All of that sounded like science fiction only a little while ago. Heck, it sounds like science fiction now.

If you go one step further, you could create an AI that can rewrite itself to become smarter. If we release such an AI onto the internet, it could bootstrap its intelligence exponentially until it became an AI god. Or Skynet, if you’re more into the dystopian school of thought.

This exponential growth of artificial intelligence (and technology in general) would go non-linear, spiking towards an asymptote: a technological singularity. In less math-speak: technology changes so fast that human minds can no longer fathom what is happening.

Origins of the singularity

The technological singularity was first proposed by John von Neumann, one of the fathers of modern computing and the atom bomb. He foresaw a point in the future where technological change would accelerate towards a singularity: a pivotal point in human history beyond which human affairs, as we know them, could not continue.

Many science fiction writers, such as Vernor Vinge and Charles Stross, have since explored the singularity and its possible consequences. When will it happen? How will it happen? What will happen after, even if we cannot understand it?

Go read some of their novels to find out about their thoughts on the subject.

Why it is unlikely

Having explained the singularity, it’s time for a reality check. Is it really going to happen that way? Personally, I doubt it.

I heard Charles Stross speak last year, and he too has come to doubt the likelihood of the singularity. The reason is simple: nothing grows exponentially into an asymptote. Moore’s law, which states that the number of transistors on a chip doubles every two years, has come to an end after fifty years. The exponential growth in the computer industry is tapering off, and the related boom in smartphones also seems to be slowing down. That is not to say there is no technological advance, but it is not non-linear.
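To get a feel for what exponential growth actually means, here is a back-of-the-envelope sketch of Moore’s law as a naive doubling process (the function names are mine, just for illustration):

```python
# Moore's law as naive exponential growth: the transistor count
# doubles every two years, so fifty years means 25 doublings.

def doublings(years, period=2):
    """Number of doublings in `years`, one per `period` years."""
    return years // period

def growth_factor(years, period=2):
    """Total multiplication factor after `years` of doubling growth."""
    return 2 ** doublings(years, period)

# Fifty years of doubling: 2^25, roughly a 33-million-fold increase.
print(growth_factor(50))  # 33554432
```

A 33-million-fold increase in fifty years is what sustained doubling buys you; that is the kind of curve the singularity argument extrapolates from, and the kind of curve that has now flattened out.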

The singularity relies on exponential growth. Whatever happens in the coming decades, it will most likely not be a complete end to what it is to be human. We ape-descended mammals will still be around in a hundred years; provided we don’t kill ourselves, of course. That does not mean AI is not happening, and AI has dangers of its own.

The dangers of AI

We don’t have to worry about AI going non-linear. Skynet is not that likely, really. To understand why, you have to look at intelligence more closely. We always pretend there is a simple scale from dumb to smart: the IQ scale. That scale, however, is mostly bullshit. A computer can score very high on an IQ test, if you code it that way. That doesn’t make it aware.

Voice control on a phone is nice, but it doesn’t mean the phone is contextually aware of what it is doing. If you say ‘call John Doe’ to your phone, it will activate the dial function and dial his number. But if you were to ask it ‘what is a call?’, it could do nothing. A phone does not understand the concept of a call, or even what it is doing. Voice control just links a combination of sounds you make to a certain command.
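That sound-to-command link can be sketched in a few lines. This is a toy model, not how any real phone assistant is implemented, and the contact name and number are made up; the point is that dispatch is just pattern matching against a lookup table, with no concept of a ‘call’ anywhere:

```python
# Toy sketch of voice control: recognized phrases map straight to
# canned actions. Nothing here models what a "call" is.

CONTACTS = {"john doe": "+1-555-0100"}  # hypothetical contact list

def dial(number):
    return f"dialing {number}"

def handle_command(text):
    """Dispatch a recognized phrase to an action, if one matches."""
    text = text.lower().strip()
    if text.startswith("call "):
        name = text[len("call "):]
        number = CONTACTS.get(name)
        if number:
            return dial(number)
        return "contact not found"
    # Anything outside the pattern -- like 'what is a call?' --
    # falls through: there is no understanding to fall back on.
    return "command not recognized"

print(handle_command("Call John Doe"))    # dialing +1-555-0100
print(handle_command("What is a call?"))  # command not recognized
```

The second query fails not because the question is hard, but because there is nothing behind the lookup table to fail gracefully with.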

Real awareness would require the full scope of human context to be implemented in the AI. It takes a human mind years to go from being born to being able to actually reason. We are nowhere near replicating that, and, contrary to what science fiction shows and novels want us to believe, it cannot spontaneously come to be.

That’s anthropomorphising the AI. The ‘oh no, the AI has gone beyond its programming parameters and is now trying to kill humanity because it thinks us inferior’ plot is utterly impossible, unless the AI magically developed a way to reason about abstract concepts like ‘inferior’ and ‘humanity’.

What can happen, and what the likes of Stephen Hawking, Elon Musk, and Bill Gates are referring to, is that we use AI unwisely. If you hook up a nuclear missile to an AI, and the AI’s programming concludes it should fire the missile at a city, then a lot of people will die. Not because of any evil intent on the AI’s part, but because it was programmed wrongly. Just like your phone can call the wrong person if you don’t speak clearly.

AIs are not dangerous in themselves; they are dangerous if we take the human decision component out of life-or-death situations without a lot of safeguards. That’s why the big fear should be fleets of AI drones that kill people because they were programmed to, with no human taking responsibility any more. Collateral damage with nobody at the trigger.


Don’t wait for the singularity, don’t fear the evil AI.

Instead, fear the people who leave life-and-death decisions to computer software.

Written by Martin Stellinga

I'm a science fiction and fantasy author/blogger from the Netherlands.