
Cloud robotics: Talking cloud and saying nothing

Cloud robotics is enabling robots to access amounts of computing power that their bodies do not have the physical space to accommodate. Hundreds, if not thousands, of servers are potentially at the service of small robots in remote locations, well away from the nearest supercomputer or data centre and connected only by, for example, Wi-Fi or Ethernet.

This allows robots to call on powerful cloud-based applications, such as speech recognition and natural language processing, when interacting with their users.

At the moment, most cloud robotics systems are linked to specific robots. So, for example, SoftBank’s Pepper robot is linked to the cloud robotics artificial intelligence system developed by Cocoro, another SoftBank company.

Pepper has about 25 onboard sensors that collect a wide range of information – sight, sound, touch and movement. That covers three of the five senses that human beings generally use; the two missing are taste and smell.

Pepper can be connected to the internet through Wi-Fi or Ethernet, both of which are incorporated into the robot.

The sensors collect information, which travels over these connections to the cloud, where the data is processed and guidance on how to respond is sent back.

How much of the decision-making is done by Pepper itself and how much is done back at the data centre is an interesting question. Suffice to say that cloud-based artificial intelligence plays an important role in Pepper’s operation.

It’s possible that only a fraction of the processing is done by Pepper itself. Most of it is probably done in the cloud, where Pepper’s applications can interrogate the databases and formulate answers that the application calculates are the most appropriate before suggesting a response to the end user. That is certainly the case with its speech recognition system.
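The round trip described above – sensor data up to the cloud, a suggested response back down – can be sketched in miniature. This is a purely hypothetical illustration: the “cloud” is simulated by a local function, and none of the names reflect SoftBank’s actual API.

```python
import json

def package_sensor_data(sight, sound, touch):
    """Robot side: bundle raw sensor readings for transmission."""
    return json.dumps({"sight": sight, "sound": sound, "touch": touch})

def cloud_process(payload):
    """Cloud side: interpret the data and suggest a response.

    A real service would run speech recognition and AI models here;
    this stand-in just branches on the sound reading.
    """
    data = json.loads(payload)
    if data["sound"] == "hello":
        return {"action": "speak", "text": "Hello! How can I help?"}
    return {"action": "idle"}

# The robot sends its readings and acts on whatever comes back.
response = cloud_process(package_sensor_data("face_detected", "hello", None))
```

The point of the split is that the heavy interpretation happens server-side; the robot only needs enough onboard processing to package readings and act on the reply.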

Pepper, the humanoid robot launched by SoftBank and Foxconn

Pepper could be seen as a messenger for the cloud computing system that it is connected to, an interface for the internet, although its makers would probably argue that the robot has plenty of onboard memory and processing power.

More is known about Watson, the artificially intelligent computer system IBM built about 10 years ago. Watson is essentially a cluster of around 90 IBM Power 750 servers, which draws on information from a variety of sources, including databases such as DBpedia, WordNet and YAGO.

Watson was not connected to the Internet when it appeared on the US game show Jeopardy!, where it scored higher than its human competitors. However, it had already downloaded the entire text content of Wikipedia before taking part, and had stored 200 million pages of data on 4 terabytes of space, arguably becoming a mini-cloud unto itself.

IBM says Watson uses cognitive computing systems to understand natural language, and that it learns rather than being explicitly programmed. Its operation follows four broad steps:

  • observe;
  • interpret;
  • evaluate; and
  • decide.
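The four steps above can be sketched as a pipeline. This is an illustrative stand-in, assuming simple placeholder logic for each stage; it is not Watson’s actual code.

```python
def observe(source):
    """Gather raw evidence from a source."""
    return source.get("facts", [])

def interpret(facts):
    """Generate candidate answers from the evidence.

    Here the 'score' is just the answer's length, a deliberately
    naive placeholder for a real confidence model.
    """
    return [{"answer": f, "score": len(f)} for f in facts]

def evaluate(candidates):
    """Rank candidates against each other by score."""
    return sorted(candidates, key=lambda c: c["score"], reverse=True)

def decide(ranked):
    """Commit to the highest-confidence answer, if any."""
    return ranked[0]["answer"] if ranked else None

source = {"facts": ["short", "a much longer answer"]}
best = decide(evaluate(interpret(observe(source))))
```

The design point is the separation: each stage can be improved or swapped out without touching the others, which is how a learning system refines itself over time.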

One of the more impressive features of Watson, according to IBM, is its ability to “spot patterns that humans had not known existed”.

Unlike IBM with Watson, SoftBank does not appear to offer a developer programme giving access to the powerful AI applications Pepper uses. The closest thing is the developer programme offered by Aldebaran, which created Pepper in the first place. Aldebaran also develops the Nao humanoid robot, and it is Nao that the programme targets.

Aldebaran’s software development kit (SDK) for Nao supports a number of programming languages, including C++, Python, Java and JavaScript.

IBM’s Watson developer programme can be accessed through Bluemix, the company’s cloud platform as a service (PaaS), and supports a range of languages and frameworks including Java, Node.js, Python and Ruby on Rails. Bluemix is built on Cloud Foundry and SoftLayer technology and infrastructure.
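Services on a platform like Bluemix are typically reached over authenticated REST calls. The sketch below is a generic illustration of that pattern; the endpoint URL, credentials and JSON shape are invented for the example and do not reflect IBM’s documented API. The request is built but not sent.

```python
import base64
import json
import urllib.request

def build_request(endpoint, username, password, text):
    """Construct (but do not send) an authenticated JSON POST request."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    # HTTP Basic auth: base64-encode "username:password".
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

req = build_request("https://example.com/watson/v1/analyze",
                    "user", "secret", "Hello, Watson")
# urllib.request.urlopen(req) would send it, given a real service and credentials.
```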

What’s in a name?

Watson is, of course, not the only artificial intelligence cloud service available on the Internet today. Nor indeed is it the only one with a human name.

Lucy is a cloud-based AI service from LetsMakeRobots.com, a free and volunteer-based initiative. That, too, is built around a specific robot, and is compatible with languages such as Python.

If custom AI is required, then companies like TinMan Systems can provide made-to-order artificial intelligence systems. The company also offers a PC-based integrated development environment (IDE), AI Builder, as well as a web-based platform.

Another company that creates custom AI solutions is LNL, which also offers AI as a cloud-based software as a service (SaaS) for robotics as well as a number of other areas, such as games and pattern recognition.

One cloud AI service that has received some press is Ersatz Labs, a “deep learning in the cloud” platform which provides software engineers with a single web interface through which to upload data, and train, test and apply models. It also offers an API which accomplishes similar things programmatically, as well as a physical piece of kit, the Ersatz Deep Learning Appliance, for those who want to keep their data away from the cloud.

They may be new and nimble, but small and medium-sized cloud AI businesses may have difficulty establishing themselves against the giant tech companies, a large number of which have launched, or are planning to launch, cloud AI services.

IBM’s Watson on Jeopardy!, where it beat the human contestants

While IBM is currently offering the Watson AI through the cloud, what it could offer in the not-too-distant future is AI services running on computers built using a version of its TrueNorth “brain chip”.

The latest iteration of the “cognitive computing” chip was unveiled last year. IBM says the chip has 1 million programmable neurons, 256 million programmable synapses, and 4,096 neurosynaptic cores.
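IBM’s headline figures fit together: the chip has 4,096 cores, each carrying 256 neurons, with 256 synapses per neuron, so the totals follow directly. A quick arithmetic check, assuming that per-core breakdown:

```python
cores = 4096
neurons_per_core = 256
synapses_per_neuron = 256

neurons = cores * neurons_per_core          # 1,048,576 – the "1 million" figure
synapses = neurons * synapses_per_neuron    # 268,435,456 – the "256 million" figure
print(f"{neurons:,} neurons, {synapses:,} synapses")
```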

The average desktop computer has a processor with two cores.

IBM says its long-term goal is to build a chip system with 10 billion neurons and 100 trillion synapses. The company is currently building an end-to-end ecosystem for developing applications on the chips that includes a simulator, a programming language, sample algorithms and applications, and a library.

While IBM develops the brain chip, Brain Corporation offers cloud services for consumer robotics, built around its small two-wheeled eyeRover robot. The service is still in beta, so not much is known about it.

But what the company does say, unsurprisingly, is that it applies neuroscience to robotics. “We are intrigued by the secrets of the brain and we use simulations to understand neural computations,” says the company.

Another company that is preparing to launch cloud robotics services is Neurala. With its “brains for bots” mantra, the company says its bio-inspired approach differs from traditional approaches by using deep learning to build robot brains that continuously observe and adapt to their environment, much like humans do.

Wolfram, whose online mathematics engine has helped many a student, is also developing AI-oriented services, integrated with its Wolfram Cloud. One of its interests is image identification.

Stephen Wolfram, founder of Wolfram, says: “With ImageIdentify built right into the Wolfram Language, it’s easy to create APIs, or apps, that use it. And with the Wolfram Cloud, it’s also easy to create websites—like the Wolfram Language Image Identification Project.”

HP claims to sell more server computers than any other company in the world. It has a data analytics platform – IDOL (Intelligent Data Operating Layer) – which could be considered an artificial intelligence cloud of sorts, although it isn’t marketed as such.

And there’s Microsoft Azure, which offers sample data sets in its Machine Learning Studio. A number of APIs are available, including facial recognition, text analytics and computer vision.

Amazon, which recently overtook Walmart by market value to become the most valuable retailer in the US, and probably the world, has also been dominant in the cloud services market for a number of years.

The company recently launched its Amazon Machine Learning service, a cloud-based AI technology which enables customers to analyse large sets of data, identify patterns and make predictions.

Jeff Barr, chief evangelist, Amazon Web Services, wrote on the company’s blog: “You can build and fine-tune predictive models using large amounts of data, and then use Amazon Machine Learning to make predictions (in batch mode or in real-time) at scale.

“You can benefit from machine learning even if you don’t have an advanced degree in statistics or the desire to setup, run, and maintain your own processing and storage infrastructure.”

[Chart: cloud infrastructure services market share. Source: Synergy Research Group]

Amazon, which more or less invented the modern cloud infrastructure market, is estimated to hold 28 per cent of it, with Microsoft second on 10 per cent. The figures come from Synergy Research Group, which places IBM third and Google fourth.

Google, which has been offering cloud services for around a decade, has a number of tools that provide developers access to the search giant’s machine learning software, such as the Google Prediction API.

Furthermore, Google looks to be developing a new, AI-specific cloud service. The company bought a startup called DeepMind last year, and Google co-founder Larry Page says he is “really excited” about the company and its technology.

“What’s really amazing about Deep Mind is that it can actually – they’re learning things in this unsupervised way,” said Page in an interview broadcast on the TED website. “They started with video games, and really just … playing video games, and learning how to do that automatically.”

He added that the AI system had not only learned to play games, but it had also learned the concept of cats and humans from watching videos on YouTube, to the extent that it can draw a reasonably accurate “sketch” of a cat or human, from its memory.

DeepMind’s stated objective is to “solve intelligence”. The company says it uses “the best techniques from machine learning systems and neuroscience to build powerful general-purpose learning algorithms”.

In an article in Nature magazine, Demis Hassabis, one of the three founders of DeepMind, says: “To advance AI, we need to better understand the brain’s workings at the algorithmic level — the representations and processes that the brain uses to portray the world around us.

“For example, if we knew how conceptual knowledge was formed from perceptual inputs, it would crucially allow for the meaning of symbols in an artificial language system to be grounded in sensory ‘reality’.”

Even if companies like DeepMind were able to “distil intelligence into an algorithmic construct”, as Hassabis puts it, the sheer scale of memory and computation required to mimic human processing power would still require vast numbers of computers.

Dharmendra Modha, IBM Fellow

Dharmendra Modha, IBM Fellow and one of the researchers into the brain chip, says: “To underscore this divergence between the brain and today’s computers, note that a ‘human-scale’ simulation with 100 trillion synapses required 96 Blue Gene/Q racks of the Lawrence Livermore National Lab Sequoia supercomputer.”

IBM claims Blue Gene/Q is the fastest analytics computer system on the planet, with a peak performance of up to 100 petaflops – a petaflop being a unit of computing speed equal to one quadrillion (a million billion) floating-point operations per second.

The human brain may be the undisputed champion of intelligent systems, but it didn’t get where it is today by resting on its laurels.

Having made huge evolutionary leaps to get from sponges to humans, via monkeys and the missing link, it now wants to make the transcendental leap to the Internet. The origins of the human brain may be a scientific mystery, but the next quantum leap in its evolutionary journey seems clear. Not content with the physical constraints of the human skull, the brain wants to connect to the cloud.

At least that’s the view of Ray Kurzweil, a computer scientist and director of engineering at Google.

Speaking on TED last year, Kurzweil said the human brain, specifically the neocortex, will in future be augmented using a hybrid combination of biological and non-biological thinking – meaning, cloud-connected human brains.

Kurzweil said: “Twenty years from now, we’ll have nanobots, because another exponential trend is the shrinking of technology. They’ll go into our brain through the capillaries and basically connect our neocortex to a synthetic neocortex in the cloud providing an extension of our neocortex.

“Now today, I mean, you have a computer in your phone, but if you need 10,000 computers for a few seconds to do a complex search, you can access that for a second or two in the cloud.

“In the 2030s, if you need some extra neocortex, you’ll be able to connect to that in the cloud directly from your brain. So I’m walking along and I say, ‘Oh, there’s Chris Anderson. He’s coming my way. I’d better think of something clever to say. I’ve got three seconds. My 300 million modules in my neocortex isn’t going to cut it. I need a billion more’.

“I’ll be able to access that in the cloud.

“And our thinking, then, will be a hybrid of biological and non-biological thinking, but the non-biological portion is subject to my law of accelerating returns. It will grow exponentially.

“And remember what happened the last time we expanded our neocortex. That was two million years ago when we became humanoids and developed these large foreheads. Other primates have a slanted brow. They don’t have the frontal cortex.

“But the frontal cortex is not really qualitatively different. It’s a quantitative expansion of neocortex, but that additional quantity of thinking was the enabling factor for us to take a qualitative leap and invent language and art and science and technology and TED conferences. No other species has done that.”