Not to be all hipster, but you’ve probably never heard of Elon Musk. I haven’t heard of him either. He’s a high-tech entrepreneur. So why am I talking about someone I’ve never met? Well, because recently he’s said some interesting things about Artificial Intelligence. Here’s one of his comments:

“If I were to guess at what our biggest existential threat is, it’s probably that… With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he’s sure he can control the demon. It doesn’t work out.”

He’s right though. Right?

Isaac Asimov. An important figure in Science Fiction.

Well, it’s kind of a difficult question to answer. Artificial Intelligence is an interesting one – a question that has plagued us ever since we collectively dreamt of robots with intelligence. That’s why Asimov came up with his Three Laws of Robotics. For those who don’t know them, they are as follows (with a quick toy sketch of how they stack up after the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
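The interesting structural point about the laws is their precedence: the Second Law only yields to the First, and the Third yields to both. Here’s a minimal, purely hypothetical sketch of that pecking order in Python – the Action fields and function names below are my own toy inventions, not any real robotics API:

```python
# A toy sketch of Asimov's Three Laws as a precedence check.
# Every name here is a hypothetical illustration, not a real API.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # would violate the First Law
    disobeys_order: bool  # would violate the Second Law
    endangers_self: bool  # would violate the Third Law

def law_violations(action: Action) -> tuple:
    """Lexicographic score: the First Law outranks the Second,
    and the Second outranks the Third."""
    return (action.harms_human, action.disobeys_order, action.endangers_self)

def choose(actions: list[Action]) -> Action:
    # min() compares tuples element by element, so any action that
    # violates the First Law loses to one that merely disobeys an order.
    return min(actions, key=law_violations)

options = [
    Action("obey the order, harming a human", True, False, False),
    Action("refuse the order", False, True, False),
]
print(choose(options).name)  # -> "refuse the order"
```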

Whether this will be enough to protect us from our A.I. overlords is another matter – and one we won’t be able to accurately predict until we’re dealing with it. It also throws up another issue, though: whether we will deem Artificial Intelligence beneath us, and less valuable than our own.

Baltar would have had a better time with the Cylon detector had he known their spines glowed in certain circumstances…

Fiction or Fact?

We’ve seen a great many examples of A.I.s gone bad. Skynet from Terminator is a global network – a single cloud-based intelligence – decimating the human world. The Matrix is another example of a cloud intelligence, only instead of killing us, the machines use us to power their own lives. Even in Prometheus, David prefers the company of aliens to humans (and rightly so, looking at how he’s been treated).

But for every nightmare future we envision for A.I.s, there’s an equal number where they either co-operate with us or are even our victims. Blade Runner saw Deckard executing androids that would not self-terminate, and I, Robot saw Will Smith (reluctantly) befriending a robot to go up against an A.I. Heck, the lines blurred in Battlestar Galactica to the extent that there were virtually no lines at all – hence the plot line where Baltar was tasked with creating a Cylon detector.

So, which version is true?

Before deciding whether A.I. will be a threat, we must first examine ourselves – most notably our treatment of others. Now, I love Star Trek and I think it’s got a brilliant message about co-operation and exploration of “the other”, but it says more about where we want to be than where we are.

Where we are is scared. If an alien landed on Earth today, many people would freak out and even turn violent. We don’t know what they want, we don’t know if we can trust them, we don’t know what they’re capable of – we know nothing. Heck, if we can’t even get along as a species, what chance does an alien have?

The real crux of the issue is whether we can view someone we know nothing about as a blank slate. We’ll often see our own nature reflected in them and assume it’s theirs too. Throughout our history, we have always had trouble distinguishing between our own behaviour and what we would consider natural behaviour.

"I am Locutus of Borg. Resistance is futile. Your life as it has been is over. From this time forward you will service us."

“I am Locutus of Borg. Resistance is futile. Your life as it has been is over. From this time forward you will service us.”

But what does this have to do with A.I.?

The metaphor of aliens for A.I.s is legitimate in so far as we know just as little about either. Will an A.I. turn against humanity and try to squash us like a bug? Will it help us improve? Will we listen? Will they see themselves as superior and try to adapt us to them like the Borg or the Cybermen?

To be fair, I’m not claiming to know which it will be, but I do think our approach to the topic needs to be adjusted.

Let’s play a game…

For those unfamiliar with basic game theory, it goes something like this: two parties unable to communicate directly have two options each – they can be either friendly or hostile. Either way it’s a gamble, with one of four outcomes (not necessarily of equal value):

  1. You could both be friendly and co-operate for mutual gain.

  2. You could be deceptively hostile and take advantage of them, increasing your gain.

  3. They could be deceptively hostile to you and you could be greatly disadvantaged.

  4. You could be hostile to each other and potentially nobody wins.

What you would do in that situation depends on several factors: how much you have to lose, how much you have to gain, whether the gain from co-operation is greater than the gain from exploitation, how much you can trust the other party to be honest and transparent, and so on. So, what would you do?
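To make that concrete, here’s a minimal sketch of the game as a payoff matrix. The numbers are my own illustrative assumptions, nothing canonical, but they capture the classic structure – this is essentially the Prisoner’s Dilemma:

```python
# A toy payoff matrix for the two-party game above. The payoff values
# are illustrative assumptions: exploiting a friendly partner pays more
# than co-operating, and mutual hostility leaves everyone worse off.

PAYOFFS = {
    # (your move, their move): (your payoff, their payoff)
    ("friendly", "friendly"): (3, 3),  # 1. mutual co-operation
    ("hostile",  "friendly"): (5, 0),  # 2. you exploit them
    ("friendly", "hostile"):  (0, 5),  # 3. they exploit you
    ("hostile",  "hostile"):  (1, 1),  # 4. nobody really wins
}

def best_response(their_move: str) -> str:
    """Your best move if you somehow knew the other party's choice."""
    return max(["friendly", "hostile"],
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

print(best_response("friendly"))  # -> "hostile"
print(best_response("hostile"))   # -> "hostile"
```

Hostility “wins” for each party in isolation, yet mutual hostility (1, 1) is worse for both than mutual co-operation (3, 3) – which is exactly the gamble described above.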

The trekkie in me would say co-operate. I’d have to agree, and would even point out that our judicial system is based upon “innocent until proven guilty”. Why assume something is deceptively hostile rather than genuinely friendly until you see evidence one way or the other?

Johnny 5 – Apparently, he’s alive!

But this is getting way too philosophical

Overall, I think we should always look before we leap – which is why some of these questions are being asked so far in advance. Treating another intelligence like a sentient being with rights is probably the best way to go about things. The threat from Skynet in Terminator arose because we panicked and tried to shut down the machines – the intelligence. We tried to kill it and it retaliated like any human would. With The Matrix (well, The Animatrix, where this was explained), we tried to deny rights to the machines – once again, to an intelligence – and they retaliated. Heck, even in Short Circuit, that 80s film about the machine that was struck by lightning and magically became self-aware, people were trying to shut it down and dissect it – something we wouldn’t do to ourselves. Each example of “robots gone wrong” shows that we (whether humans, or organic life in general) are naturally to blame.

So what approach should we take?

I’m not claiming to be an expert. I’m just a geek that loves his Sci-Fi. I’m someone who can see the error of someone else’s ways and try not to make the same mistakes. That’s why I would argue that if we develop Artificial Intelligence (and I do mean ‘if’, because even now we struggle to define intelligence), we should treat it like we would want to be treated. We should treat A.I.s like family – give them a purpose, give them a place in our world, and give them the freedom to change themselves. If we bring up children to hold our own values dear, shouldn’t we also bring up A.I.s to do the same?