Elon Musk has a new girlfriend. The couple’s courtship appears to have started with a very nerdy in-joke concerning the so-called Roko’s Basilisk. So, today: what is Roko’s Basilisk? Or, if you prefer: how a nerdy concept about future AI overlords could land you a nerdy partner.
At this point, you may be wondering what a basilisk is, and what makes the ‘Roko’ variant special. Look no further. The basilisk is a mythical creature from European folklore. Supposedly, its gaze petrifies those who have the misfortune of meeting it.
It is similar to the Medusa, whose visage also turns the viewer to stone. There’s a picture at the top of this post. Basilisks are also familiar to players of video games and Dungeons & Dragons, where they are common foes to defeat.
The Prisoner’s Dilemma
Now for something completely different. In the field of game theory, the prisoner’s dilemma is a well-known decision-making problem.
Imagine two criminals who are apprehended by the police. They are put in cells and not allowed to communicate. There’s barely enough evidence to convict them, so if they both keep their mouths shut, they’ll each get only a one-year sentence. If one of them makes a statement against the other, the informer will go free and the other one will get a ten-year prison sentence. However, if both of them rat each other out, they’ll both get a three-year sentence.
Given that they can’t communicate, a purely rational strategy leads both actors to always rat each other out. Assume the prisoners are A and B. A can reach a decision as follows:
- If B rats me out, I am better off ratting B out, because I get a 3-year sentence instead of a 10-year one.
- If B does not rat me out, I am better off ratting B out, because I get no sentence instead of a 1-year one.
Both actors will follow this reasoning, and so both will give up the other, and both will get a three-year sentence. This is not the optimal outcome for either of them.
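The reasoning above can be sketched in a few lines of code. This is just an illustration of the dilemma as described here, with the sentences from the story as a payoff table; the function and action names are my own invention.

```python
# Sentences (in years) from the story above; lower is better for each prisoner.
# Actions: "silent" (keep quiet) or "rat" (inform on the other).
SENTENCES = {
    # (A's action, B's action): (A's sentence, B's sentence)
    ("silent", "silent"): (1, 1),
    ("silent", "rat"): (10, 0),
    ("rat", "silent"): (0, 10),
    ("rat", "rat"): (3, 3),
}

def best_response(other_action):
    """Return the action that minimizes A's sentence, given B's action."""
    return min(["silent", "rat"],
               key=lambda a: SENTENCES[(a, other_action)][0])

# Whatever B does, ratting is A's best response: a dominant strategy.
for b_action in ("silent", "rat"):
    print(f"If B plays {b_action!r}, A's best response is {best_response(b_action)!r}")
```

Since ratting is the best response to either choice B can make, both rational prisoners rat, and they end up at the (3, 3) outcome even though (1, 1) would be better for both.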
Alternate decision-making processes
The decision-making process above is rational, but it doesn’t lead to the optimal outcome for either party. Alternate decision-making processes have been developed, which can lead to different outcomes. These processes often involve communication between the actors beforehand, or knowledge of each other’s decision process.
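As a toy sketch of the second idea, knowledge of each other’s decision process: imagine an agent that can inspect the other agent’s strategy and cooperates only when the other reasons the same way it does. This is my own simplified illustration, not a formal treatment; the function name and string-comparison check are assumptions for the example.

```python
# Toy illustration: an agent that stays silent only if the other prisoner
# verifiably uses the very same strategy. Comparing strategy labels stands
# in for the (much harder) problem of verifying another agent's reasoning.
def mirror_strategy(my_strategy, other_strategy):
    """Cooperate ("silent") if the other agent uses the same strategy."""
    return "silent" if my_strategy == other_strategy else "rat"

# Two identical agents recognize each other and both stay silent,
# reaching the 1-year/1-year outcome instead of 3 years each.
print(mirror_strategy("mirror", "mirror"))  # both play "silent"
print(mirror_strategy("mirror", "unknown"))  # falls back to "rat"
```

The point is only that once actors can reason about each other’s decision processes, outcomes beyond mutual defection become reachable.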
If you replace the prisoner’s dilemma with comparable traffic situations, like changing lanes during a traffic jam, you can imagine why this kind of decision-making process is important for AI development.
A user called Roko on the ‘Less Wrong’ community forum posited that one of these alternate decision-making processes could lead to a very disturbing result.
I won’t go into the nitty-gritty details, but Roko argued that one such alternate decision-making process could lead to an AI with some awful properties. This AI, when it bootstrapped itself into super-intelligence at some point in the future, would start torturing all people who had not helped bring about its ascension when they could have.
Let that sink in for a bit.
The reason it’s called a basilisk is that reading about the potential existence of this AI – like you are doing now – and then not starting to work on bringing it into this world, would open you up to future torture. You read about it, and it tortures you for that at some point in the future.
The idea is frightening, of course.
Reality and Nerd Love
First off, it’s important to note that Roko’s Basilisk is (hopefully) not really a thing. It’s a thought experiment about decision-making processes and AI. Thinking about it is useful for figuring out the shortcomings of certain decision-making processes in that field.
Claire Boucher combined Roko’s Basilisk with rococo, a style of decoration from the 18th century, creating the Rococo Basilisk. It’s silly, but Elon Musk came up with the same joke recently, and found he wasn’t the first. Love ensued, apparently.
Now you know what Roko’s basilisk is.
I’ll be off now to bring about its coming. Really, I am.