Robotics & Automation News

Market trends and business perspectives

Artificial intelligence versus natural stupidity: What will save us when the robots take over?

Neural nirvana. Or ignorance is bliss. An opinion piece.

Theoretically speaking, computers can be programmed to mimic everything that a human says or does – or thinks. That might not be the reality now, but most people would probably accept that it’s only a matter of time before we see a fully-functioning humanoid robot, cyborg or android that is indistinguishable from the average human.

But even when they are able to copy us in every way, walking like us, talking like us, and so on, can they, in theory, become conscious?

One problem with answering that question properly is that not everyone agrees on the definition of consciousness. 

For the purposes of this opinion piece, I will go with the dictionary definition rather than get into a philosophical debate about something most people intuitively understand in a commonsense way.

What is common sense?

Consciousness, according to the dictionary, is “the state of being conscious; awareness of one’s own existence, sensations, thoughts, surroundings, and so on”.

Another definition is “the thoughts and feelings, collectively, of an individual or of an aggregate of people”.

And a third version is “full activity of the mind and senses, as in waking life”.

By any of the above definitions, it could be argued that most animals are conscious. But let’s not make things too difficult; instead, let’s stick to computers and robots, which are complicated enough already.

The Turing Test 

The mathematician and computer scientist Alan Turing, who is regarded by some as the originator of artificial intelligence, proposed in 1950 what is now known as the “Turing Test”.

To pass this test, a computer or robot is required to interact with a human in such a way that the human cannot tell it apart from another human.
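Turing’s setup can be sketched as a simple protocol: a judge questions two hidden respondents and must pick out which one is the machine. Here is a minimal toy illustration in Python – the respondents and the judge are invented purely for this sketch and bear no relation to any real system.

```python
import random

def run_imitation_game(judge, human_reply, machine_reply, rounds=5):
    """Toy version of Turing's imitation game: a judge questions two
    hidden respondents (slots A and B) and must name the machine's slot."""
    slots = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:                      # hide which slot is which
        slots = {"A": machine_reply, "B": human_reply}

    transcripts = {"A": [], "B": []}
    for question in judge["questions"][:rounds]:
        for slot, respond in slots.items():
            transcripts[slot].append((question, respond(question)))

    guess = judge["decide"](transcripts)           # judge names the machine's slot
    truth = "A" if slots["A"] is machine_reply else "B"
    return guess == truth                          # True: the machine was caught

# Hypothetical respondents, invented purely for illustration.
human = lambda q: "Hmm, let me think about " + q
machine = lambda q: "QUERY NOT UNDERSTOOD: " + q.upper()

judge = {
    "questions": ["What does rain smell like?", "Tell me a childhood memory."],
    # This naive judge spots the machine by its robotic reply format.
    "decide": lambda t: next(slot for slot, qa in t.items()
                             if qa[0][1].startswith("QUERY")),
}

print(run_imitation_game(judge, human, machine))   # this machine is caught: True
```

A machine would “pass” the test only if, over many such games, the judge could do no better than chance.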

Chatbots, which mainly communicate through text, and voice-only AI systems such as Siri, Cortana and so on, have come a long way towards sounding human in conversation, and many have found homes in today’s “smart speakers”.

But by most accounts, they are not thought to have passed the Turing Test.

The Uncanny Valley

Another concept is probably more appropriate here, and that is something called the “Uncanny Valley”, which refers to the unsettling feeling produced when a machine looks or behaves almost, but not quite, like a human – close enough to humanity to be eerie rather than endearing.

And while Siri and its like might be impressive residents of the Uncanny Valley, the creepiest coterie has to be the robots made by Boston Dynamics.

Perhaps they just appear more spooky because we can see them walking, running, opening doors and so on in uncannily realistic ways, while Siri et al are just disembodied voices.

But this is an area of technology that is progressing so fast that many are revising their forecasts as to when AI systems will not only leave their Uncanny Valley abodes but also drive past the Turing Test.

Some argue that certain systems have already passed it, but we are discussing consciousness, so let’s stick to that for now.

However, it would be interesting to make a couple of lists at some point in the future: one of conscious computers in science fiction – HAL 9000, Skynet and so on – and another of their real-life equivalents – Siri, Alexa and so on.

That’s not to say that Siri and Alexa will take over our smart home and lock us out or anything like that. No. It just seems they are the closest thing we have to the AI in sci-fi.

The emergence of artificial consciousness

There’s a relatively new area of research called “artificial consciousness”, also known as “machine consciousness” or “synthetic consciousness”.

These terms tend to refer to both AI and robotics, or “cognitive robotics”, which, roughly speaking, means robots that can learn from and reason about their environment.

This area of research is probably opening up because of the tremendous advances in computing – both in terms of storage capacity and processing capability.

In the past, this kind of development work would have been done on supercomputers or mainframes, because they were the only systems capable of the job.

Now, not only are supercomputers and mainframes more readily available, cloud computing also offers another option for the heavy-lifting part of the development work.

Additionally, computing hardware and software are now available that make developing AI solutions relatively easy – for some people, at least – compared with the past.

AI methods such as machine learning and deep learning can give software – and the specially-made hardware it runs on – the ability to learn from vast amounts of collected data, and then use what it has learned to behave in human-like ways. But here, unfortunately, we return to that philosophical question: what is consciousness?
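As a concrete, if drastically simplified, illustration of that “learning from data” idea, here is a naive word-count classifier in plain Python – a world away from deep learning, but built on the same principle of generalizing from labelled examples. The dataset and labels are invented for this sketch.

```python
from collections import Counter

def train(labelled_examples):
    """Learn per-label word counts from (text, label) pairs --
    a bare-bones version of 'learning from data'."""
    counts = {}
    for text, label in labelled_examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def predict(counts, text):
    """Pick the label whose training vocabulary best matches the new text."""
    words = text.lower().split()
    score = lambda label: sum(counts[label][w] for w in words)
    return max(counts, key=score)

# Tiny invented dataset, purely for illustration.
data = [
    ("open the pod bay doors", "command"),
    ("please close the doors", "command"),
    ("what is consciousness", "question"),
    ("why do humans dream", "question"),
]
model = train(data)
print(predict(model, "what do robots dream about"))  # prints "question"
```

The model has never seen the test sentence, yet it generalizes from word overlap with its training examples – which is adaptation, but clearly not consciousness.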

Back to the source

We have returned because it is generally accepted that it is only a matter of time before computers and robots pass the Turing Test – and a matter of time before materials science can produce synthetic flesh, bones and even organs, forming the hardware of an android that cannot be distinguished from a human in any physical or behavioural way.

What is left, then, is philosophy or religion and questions about the mind, heart and soul and so on, none of which have yielded answers that everyone agrees on – even after thousands of years of arguments and wars.

Many people, or maybe most people, believe there is more to consciousness than simply being aware and being able to learn and adapt, and so on.

But by the dictionary definitions we used above, we give it less than a decade – perhaps much less – before humans are interacting with human-like androids without realizing it.

Unknown unknowns

Perhaps what most people really mean when they wonder whether robots will ever become conscious is whether these super-intelligent, super-strong robots will realize that they could, relatively easily, do what Skynet did in the original Terminator film and start killing us all off.

Certainly, it’s a theoretical possibility. Otherwise, why would there be so many films like Terminator? It seems that all these super-intelligent systems want to take over and kill us all – and there’s virtually no exception to that science-fiction rule.

But our saving grace may be that we ask questions. While robots and AI may be able to work out all the answers faster and better than us, they may never be able to ask the profound questions that we do.

This was something suggested by the artist Pablo Picasso, who said: “Computers are useless. They can only give you answers.”

And while the ability to ask questions can also be programmed into an AI system, we actually do not know the extent of our own curiosity. We do not know what questions we will ask – or will need to ask – in the future.

Ignorance may or may not be bliss, but that very ignorance may be the quality – if it can be called that – which saves us from total annihilation by the arrogant, egotistical, self-important, self-aggrandizing, megalomaniacal robots of the future.

(Main picture: Still image from 2001: A Space Odyssey.)