Within every intelligent machine, such as an industrial robotic arm, there is, of course, software.
The software is what enables it to operate according to its requirements.
And as the machines get more sophisticated, so does the software.
It’s easy to underestimate the difficulty of computer programming if, like us, you know little about coding.
But as a software engineer at Hannover explained, if a robot is programmed to find an object at location A, it can get totally bamboozled if the object is one or two millimetres out from where it is supposed to be.
This, of course, is an overly simplistic example, but many lines of code go into fixing even that simple problem.
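To make the problem concrete, here is a minimal sketch, in Python, of why a couple of millimetres can defeat a naively programmed robot, and what the extra code might look like. The positions, tolerances and function names are invented for illustration, not taken from any real controller.

```python
# Hypothetical sketch: a naive pick routine fails when the object is even
# slightly away from its expected position; handling that offset is part of
# the "many lines of code" the engineer described.
from math import hypot

EXPECTED = (120.0, 85.0)  # expected object position in mm (assumed values)
TOLERANCE_MM = 0.5        # naive controller: anything outside this fails

def naive_pick(detected):
    """Succeed only if the object is almost exactly where the program expects."""
    return hypot(detected[0] - EXPECTED[0], detected[1] - EXPECTED[1]) <= TOLERANCE_MM

def corrected_pick(detected, search_radius_mm=5.0):
    """Extra code: accept the object anywhere within a search radius and
    re-target the gripper to the detected position instead of the expected one."""
    offset = hypot(detected[0] - EXPECTED[0], detected[1] - EXPECTED[1])
    if offset <= search_radius_mm:
        return detected  # move to where the object actually is
    return None          # genuinely out of place: report failure

# An object just 2 mm out of position defeats the naive routine
# but not the corrected one.
print(naive_pick((122.0, 85.0)))      # False
print(corrected_pick((122.0, 85.0)))  # (122.0, 85.0)
```

Even this toy version only handles one failure mode; a real system also has to cope with rotated, occluded or missing objects, which is where the code rapidly grows.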
The software expert said: “A robot has zero ability to improvise unless it is written into the code.” She added that certain new technologies — machine learning, for example — could provide a different, more powerful way to solve this problem.
However, if it’s anything like the “reveries” the robots in Westworld have, maybe we need to think about exactly how much power we want to give these robots.
One of the most complex challenges — and there are many software challenges in a variety of different fields — is teaching a robot how to recognise objects and then pick them up appropriately.
Such a task might be easy if all the objects are identical, or if the robot has to pick a single object each time, but it is much harder when the robot faces many different objects and has to pick out several of them.
Think of it like shopping. When a human visits a supermarket, they wander along the aisles and view many hundreds of objects and pick out maybe 20 or 30 items to put in their basket and take to the counter.
Similar activity takes place in warehouses every day, but robots are not capable of doing that job anywhere near as effectively as a human.
The hardware isn’t the main problem, although there are still advances to be made in gripping technology. The main problem at the moment is the software.
A robot can look at products through its cameras and search its cloud-hosted database to identify those products, which is difficult by itself. But then it has to know how to pick them up — a bag of potato chips needs to be handled differently to a bottle or a vegetable.
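One way to picture the second half of that problem is a lookup from recognised product category to grip strategy. The categories and strategy names below are invented for illustration; a real system would choose among actual gripper commands.

```python
# Hypothetical sketch: once a product is identified, the software still has to
# decide HOW to pick it up. A bag of chips, a bottle and a vegetable each
# need a different approach.
GRIP_STRATEGIES = {
    "bag_of_chips": "suction_cup_top",       # deformable packaging: lift from above
    "bottle": "parallel_jaw_side",           # rigid cylinder: squeeze from the sides
    "vegetable": "soft_gripper_low_force",   # irregular and easily bruised
}

def choose_grip(category):
    """Return the grip strategy for a recognised product category,
    or None if the robot has no strategy for it."""
    return GRIP_STRATEGIES.get(category)

print(choose_grip("bag_of_chips"))  # suction_cup_top
print(choose_grip("pineapple"))     # None (unrecognised: needs a fallback)
```

A hand-written table like this only scales to a few known products, which is exactly why the article turns to machine learning next.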
The software expert who spoke to Robotics and Automation News said it could be that machine learning will have a role to play, although she added that it would take time to train the models.
She said that the crucial thing that makes artificial intelligence work is learning — AI models need to learn from experience, from seeing objects, picking them up, and so on.
It does not necessarily need a pre-made database — it essentially builds one of its own.
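A minimal sketch of what “building a database of its own” could mean is a nearest-neighbour learner that stores each object it encounters, together with the grip that worked, and reuses the closest past experience for new objects. The feature vectors and grip labels here are invented for illustration.

```python
# Hypothetical sketch of learning from experience: no pre-made database,
# just a growing store of (object features, successful grip) pairs.
from math import dist

class ExperienceStore:
    def __init__(self):
        self.examples = []  # (features, grip) pairs learned so far

    def record(self, features, grip):
        """Remember an object and the grip that worked on it."""
        self.examples.append((features, grip))

    def suggest(self, features):
        """Suggest the grip used on the most similar object seen so far."""
        if not self.examples:
            return None  # nothing learned yet
        _, grip = min(self.examples, key=lambda ex: dist(ex[0], features))
        return grip

store = ExperienceStore()
store.record((0.9, 0.1), "suction")        # e.g. a flat, light packet
store.record((0.2, 0.8), "parallel_jaw")   # e.g. a tall, rigid bottle
print(store.suggest((0.85, 0.15)))         # suction
```

This captures the idea, though the expert’s caveat applies: real models need far more data, and training them takes time.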
And, of course, this learning and then the application of the learning can be useful in a huge range of tasks.
Machine learning is a branch of AI, and deep learning is in turn a branch of machine learning. These new approaches offer much for the future, and many companies are looking into them.
But it will take time for the first true AI programs to start having an effect on robotics and automation.
When they are ready, however, the manufacturing and logistics sectors will probably see some fundamental changes.