
31 August 2017

Asimov and his Laws

In the original Asimov books, robots are conceived of as servants to humans; hence the original Laws are formulated the way they are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The robots in the stories become more autonomous and are portrayed as civil servants; the most notable is R. Daneel Olivaw, robot partner to a human detective named Elijah Baley. The name Elijah is surely no coincidence. As a detective in a world with almost total surveillance, Elijah is confronted with highly devious and irrational human behaviour. He has to put himself in the shoes of the criminals in order to solve the crimes.

Asimov, like most writers on robots, was basically retelling the Pinocchio story over and over. How does a machine think like a human? It can only do so by becoming ever more human. There is no other solution to this problem.

After writing a bunch of robot stories (and seemingly having exhausted Pinocchio as a trope), Asimov moved on to the Foundation novels - two sets of them, written decades apart. In the first set a shadowy organisation, headed by Hari Seldon, is guiding humanity through an impending crisis. In other words, Seldon is also a prophet, though armed with science rather than righteousness. Seldon has invented a calculus of human behaviour: psychohistory. He sees patterns that only become apparent when trillions of us span the galaxy. Using the methods of psychohistory, Seldon sees the crisis coming and prepares so that the accumulated knowledge of humanity will survive.

But it gets very weird after this. Asimov becomes increasingly interested in telepathy, and it begins to permeate all the stories. Then he goes back to robots. What if a robot is like a human, but also a telepath? Of course he would see how human frailty leads to suffering. Any robot cursed with telepathy would suffer an existential crisis. And so was born the Zeroth Law:

  0. A robot may not harm humanity, or, through inaction, allow humanity to come to harm.

Daneel returns, able to read minds. He can understand what motivates humans and tries to stop them from destroying themselves. It is he who guides Seldon to psychohistory and pulls many other strings behind the scenes. Note that Daneel is still bound by the Three Laws.

Asimov's earlier books place Pinocchio in a future utopia marred by humans who are what we might call psychopaths - unable or unwilling to behave according to the law, despite universal surveillance. Asimov becomes consumed by contemplating impending disaster and how a great empire might avoid collapse. In other words, he reflected some of the major social issues of 1950s USA, through a rather messianic lens.

By the time he came to reinvent the robot detective as a telepath, in the second set of Foundation novels, the Cold War and the arms race were in full swing. Asimov was apparently fantasising about how we could avoid Armageddon (and I know that feeling quite well). If only someone (messiah/angel) could come along and save us from ourselves, by reading our thoughts and changing them for us so we didn't mess things up. But what if they could only nudge us towards the good? Note that at present the UK has a shadowy quango - the Behavioural Insights Team - designed to nudge citizens towards "good" behaviour (as defined by the government, mostly in economic terms).

Ironically, Asimov's themes were not rocket science. He sought to save us from ourselves.

Humanity is going through one of those phases in which we hate ourselves. We may not agree with Jihadis, but we do think that people are vile, mean, greedy, lazy, untrustworthy, etc. Most of us, the story goes, don't really know how to behave, and the world would be a much better place if humans were gone. We are, the central narrative goes, "destroying the planet".

For example, we drive like idiots and kill vast numbers of people as a result. In the UK in 2016, 24,000 people were killed or seriously injured on our roads. This includes 1,780 fatalities. AI can drive much better and save us from ourselves. The AI can even make logical moral decisions based on Game Theory (aka psychopathy) - the trolley problem is simply a matter of calculation. Though of course to describe a person as "calculating" is not a compliment.
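
To illustrate just how crude that calculation is, here is a minimal sketch in Python of a utilitarian expected-harm comparison - a guess at the sort of arithmetic such an AI might do, not anything from a real autonomous-vehicle system. The probabilities, harm counts, and the swerve_or_stay function are all invented for illustration:

    # A deliberately crude "trolley problem" calculator: pick whichever
    # action minimises expected harm. All numbers are invented; nothing
    # here reflects any real autonomous-vehicle system.

    def expected_harm(outcomes):
        """Sum probability-weighted harm over (probability, people_harmed) tuples."""
        return sum(p * harmed for p, harmed in outcomes)

    def swerve_or_stay(stay_outcomes, swerve_outcomes):
        """Return whichever action has the lower expected harm."""
        if expected_harm(swerve_outcomes) < expected_harm(stay_outcomes):
            return "swerve"
        return "stay"

    # Staying on course: 90% chance of hitting five pedestrians.
    stay = [(0.9, 5), (0.1, 0)]
    # Swerving: certain to hit one bystander.
    swerve = [(1.0, 1)]

    print(swerve_or_stay(stay, swerve))  # prints "swerve"

The arithmetic is trivial; everything morally interesting is smuggled into the weights. Which is rather the point.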

It's a given in this AI scenario that humans are redundant as decision-makers. This is another sci-fi trope. If we don't make decisions, we just consume resources and produce excrement. So if we hand over decision-making to AIs, we may as well kill ourselves and save the AI the trouble.

If we want AIs to make decisions that will benefit humans, then we're back to Pinocchio. But I think most AI people don't want to benefit humans; they want to *replace* us. In which case it will be war. In a sense, the war over whether humans are worth saving has already begun. A vocal minority are all for wiping us out and letting evolution start over. I'm not one of them.

Computers are tools. We already suffer from the bias that when we have a hammer, everything looks like a nail. May the gods help us if we ever put the hammer itself in charge.
