
Drones navigate unseen environments with liquid neural networks

MIT researchers demonstrate a new advance in autonomous drone navigation, using brain-inspired liquid neural networks that excel in out-of-distribution scenarios

In the vast skies where birds once ruled supreme, a new crop of aviators is taking flight. These pioneers of the air are not living creatures, but rather a product of deliberate innovation: drones.

But these aren’t your typical flying bots, humming around like mechanical bees. Rather, they’re avian-inspired marvels that soar through the sky, guided by liquid neural networks to navigate ever-changing and unseen environments with precision and ease.

Inspired by the adaptable nature of organic brains, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory introduced a method for robust flight navigation agents to master vision-based fly-to-target tasks in intricate, unfamiliar environments.

The liquid neural networks, which can continuously adapt to new data inputs, showed prowess in making reliable decisions in unknown domains like forests, urban landscapes, and environments with added noise, rotation, and occlusion.

These adaptable models, which outperformed many state-of-the-art counterparts in navigation tasks, could enable real-world drone applications like search and rescue, delivery, and wildlife monitoring.

The recent study, published in Science Robotics, details how this new breed of agents can adapt to significant distribution shifts, a long-standing challenge in the field.

The team’s new class of machine learning algorithms captures the causal structure of tasks from high-dimensional, unstructured data, such as pixel inputs from a drone-mounted camera.

These networks can then extract crucial aspects of a task (that is, understand the task at hand) and ignore irrelevant features, allowing acquired navigation skills to transfer seamlessly to new environments.

Daniela Rus, CSAIL director and MIT professor, says: “We are thrilled by the immense potential of our learning-based control approach for robots, as it lays the groundwork for solving problems that arise when training in one environment and deploying in a completely distinct environment without additional training.

“Our experiments demonstrate that we can effectively teach a drone to locate an object in a forest during summer, and then deploy the model in winter, with vastly different surroundings, or even in urban settings with varied tasks such as seeking and following.

“This adaptability is made possible by the causal underpinnings of our solutions. These flexible algorithms could one day aid in decision-making based on data streams that change over time, such as medical diagnosis and autonomous driving applications.”

Droning on

A daunting challenge was at the forefront: can machine learning systems understand, from data alone, the task they are given when flying a drone to an unlabeled object? And can they transfer that learned skill to new environments with drastic changes in scenery, such as flying from a forest to an urban landscape?

What’s more, unlike the remarkable abilities of our biological brains, deep learning systems struggle with capturing causality, frequently overfitting their training data and failing to adapt to new environments or changing conditions.

This is especially troubling for resource-limited embedded systems like aerial drones, which need to traverse varied environments and respond to obstacles instantaneously.

The liquid networks, in contrast, offer promising preliminary indications of their capacity to address this crucial weakness in deep learning systems. The team’s system was first trained on data collected by a human pilot, to see how the learned navigation skills transferred to new environments under drastic changes in scenery and conditions.

Unlike traditional neural networks that only learn during the training phase, the liquid neural net’s parameters can change over time, making them not only interpretable, but more resilient to unexpected or noisy data.
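For readers curious about what “parameters that change over time” means in practice, here is a minimal NumPy sketch of a single liquid time-constant (LTC) cell in the spirit of the formulation Hasani and colleagues have published. It is an illustration only: the layer sizes, weights, and function names are assumptions, not the team’s code.

```python
import numpy as np

# Minimal sketch of one liquid time-constant (LTC) cell update.
# Sizes and weights are illustrative assumptions, not the published model.
rng = np.random.default_rng(0)
n_hidden, n_input = 8, 4

W = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # recurrent weights
U = rng.normal(scale=0.5, size=(n_hidden, n_input))   # input weights
b = np.zeros(n_hidden)                                # bias
tau = np.ones(n_hidden)                               # base time constants
A = rng.normal(size=n_hidden)                         # per-neuron target state

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ltc_step(x, I, dt=0.1):
    """One discretized LTC update. The gate f depends on the current
    state and input, so each neuron's effective time constant keeps
    adapting after training ends -- the 'liquid' behaviour."""
    f = sigmoid(W @ x + U @ I + b)  # state- and input-dependent gate
    # Fused semi-implicit Euler step for dx/dt = -(1/tau + f)*x + f*A
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# Roll the cell forward on random stand-ins for camera features.
x = np.zeros(n_hidden)
for _ in range(100):
    x = ltc_step(x, rng.normal(size=n_input))
```

The key contrast with a conventional recurrent cell is visible in `ltc_step`: the gate `f` is recomputed from the live state and input at every step, so the dynamics themselves shift with the data rather than being frozen at training time.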

In a series of quadrotor closed-loop control experiments, the drones underwent range tests, stress tests, target rotation and occlusion, hiking with adversaries, triangular loops between objects, and dynamic target tracking. They tracked moving targets and executed multi-step loops between objects in never-before-seen environments, surpassing the performance of other cutting-edge counterparts.
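To make the closed-loop setting concrete, the sketch below shows the shape of such an evaluation loop: perceive a camera frame, update the recurrent state, issue a velocity command, and repeat. Every interface here (the drone stub, the policy signature) is a hypothetical placeholder, not the SDK or code used in the study.

```python
import numpy as np

class DroneStub:
    """Hypothetical stand-in for a quadrotor SDK; a real deployment
    would read the onboard camera and send velocity setpoints."""
    def read_camera(self):
        return np.zeros((144, 256, 3), dtype=np.uint8)  # dummy frame
    def send_velocity(self, vx, vy, vz, yaw_rate):
        pass  # a real SDK call would go here

def dummy_policy(frame, hidden):
    """Placeholder for a trained liquid-network policy: maps a camera
    frame and recurrent state to a velocity command and a new state."""
    command = (0.5, 0.0, 0.0, 0.0)  # fly slowly forward
    return command, hidden

def run_episode(drone, policy, hidden=None, steps=500):
    """Closed-loop control: the recurrent hidden state carries context
    from one control step to the next, which is what lets an adaptive
    network respond to shifting conditions mid-flight."""
    for _ in range(steps):
        frame = drone.read_camera()
        command, hidden = policy(frame, hidden)
        drone.send_velocity(*command)
    return hidden

run_episode(DroneStub(), dummy_policy)
```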

The team believes that the ability to learn from limited expert data and understand a given task while generalizing to new environments could make autonomous drone deployment more efficient, cost-effective, and reliable.

Liquid neural networks, they noted, could enable autonomous air mobility drones to be used for environmental monitoring, package delivery, autonomous vehicles, and robotic assistants.

Dr Ramin Hasani, MIT CSAIL research scientist, says: “The experimental setup presented in our work tests the reasoning capabilities of various deep learning systems in controlled and straightforward scenarios.

“There is still so much room left for future research and development on more complex reasoning challenges for AI systems in autonomous navigation applications, which has to be tested before we can safely deploy them in our society.”

Alessio Lomuscio, PhD, Professor of AI Safety in the Department of Computing at Imperial College London, says: “Robust learning and performance in out-of-distribution tasks and scenarios are some of the key problems that machine learning and autonomous robotic systems have to conquer to make further inroads in society-critical applications.

“In this context the performance of liquid neural networks, a novel brain-inspired paradigm developed by the authors at MIT, reported in this study is remarkable. If these results are confirmed in other experiments, the paradigm here developed will contribute to making AI and robotic systems more reliable, robust and efficient.”

Clearly, the sky is no longer the limit, but rather a vast playground for the boundless possibilities of these airborne marvels.

MIT researchers Makram Chahine, Ramin Hasani, Patrick Kao, and Aaron Ray wrote the paper with former CSAIL researcher Ryan Shubert MEng ’22, MIT CSAIL postdocs Mathias Lechner and Alexander Amini, and MIT CSAIL Director Daniela Rus.

This research was supported in part by Schmidt Futures, the United States Air Force Research Laboratory, the United States Air Force Artificial Intelligence Accelerator, and the Boeing Company.

