New book: ‘Robot Rules’ – Beyond Asimov’s ‘Three Laws of Robotics’

A new book looks beyond Asimov’s so-called “Three Laws of Robotics” to the practical, real-world problems now being raised by increasingly autonomous machines.

For millennia, the laws governing society have had one subject: humans. The new book Robot Rules explores how we might rewrite laws for artificial intelligence.

Who or what should be liable if an intelligent machine harms a person or property? Is it ever wrong to damage or destroy a robot? Can AI be made to follow moral rules? How should AI even be treated – as an object, a subject, or a person?

These are just some of the questions raised by Jacob Turner in Robot Rules. Answers to these questions will likely influence how the technology develops over the next 10 to 20 years.

Although we may imagine horror scenarios of AI destroying humanity, Turner says the actual challenge is to consider how humanity should live alongside it.

Paradoxically, the popular media’s focus on the existential threat posed by AI masks, rather than prepares us for, the extraordinary changes which will result.

As Lord Neuberger, President of the UK Supreme Court 2012-2017, says in his Foreword to Robot Rules: “The potential effects are so far-reaching to all aspects of our physical, mental, social and moral lives that most people find them too challenging to think about in any constructive or practical way.”

As a legal phenomenon, AI is unique. If AI causes harm or creates something beneficial, we need to establish who or what is responsible.

For example, we need to consider whether controls should be imposed on AI’s human creators, or whether such controls can instead be built into, or taught to, AI itself.

This book provides a roadmap for a new set of regulations. It asks not just what the rules should be, but who should shape them and how they can be upheld.

Robot Rules also discusses whether AI should be granted rights from a moral perspective, asking whether one day we might provide some AI systems with similar protections to those we give animals.

The need to prepare is urgent. Most people have failed to notice AI’s tightening grip, Turner argues, because companies carefully stagger the release of new technologies, gradually immersing their users.

Turner says: “Because of the natural psychological tendency not to notice a series of small changes, humans risk becoming like frogs in a restaurant. If you drop a live frog into a pot of boiling water it will try to escape.

“But if you place a frog in a pot of cold water and slowly bring it to the boil the frog will sit calmly, even as it is cooked alive.”

Turner explains that at present we have an unparalleled opportunity to build a new set of institutions and rules, to shape the world of AI before it is fully formed.

Robot Rules provides guidance to lawyers, computer scientists, engineers, ethicists, policy-makers, and all those who want to participate in this task.