Boston Dynamics has partnered with Google Cloud and Google DeepMind to integrate advanced artificial intelligence models into its industrial inspection platform, in a move the company says will significantly expand the capabilities of autonomous robots in real-world environments.
The collaboration brings Google’s Gemini and Gemini Robotics-ER 1.6 into Boston Dynamics’ Orbit software, specifically its AI Visual Inspection (AIVI) and AIVI-Learning systems, which analyze images captured by the company’s Spot robot.
According to Boston Dynamics, the integration is designed to enable robots to move beyond basic detection tasks and instead perform more complex reasoning about their surroundings.
From detection to reasoning in industrial inspections
Boston Dynamics says AIVI allows Spot to answer questions about a facility based on visual data – such as identifying whether a door has been left open or detecting hazards like spills.
In a demonstration shared by the company, the robot is shown performing a simple task – identifying and sorting objects – though the intended application is industrial inspection, where robots monitor equipment and site conditions.
The company says the addition of Gemini enables what it describes as more “holistic” site awareness, allowing facilities to identify risks earlier and automate tasks that would otherwise require human workers across multiple shifts.
Boston Dynamics writes in its blog that “industrial environments are incredibly complex, and the assets you manage require more than just basic object recognition”, adding that the new system is intended to deliver “higher-order reasoning and more complex visual analysis”.
DeepMind focuses on ‘embodied reasoning’
In a separate announcement, Google DeepMind introduced Gemini Robotics-ER 1.6 as a model specifically designed for robotics applications, emphasizing what it calls “embodied reasoning” – the ability for machines to interpret and act within the physical world.
DeepMind says the model improves a robot’s ability to understand spatial relationships, analyze multiple camera views, and determine whether a task has been successfully completed – a capability known as “success detection,” which is considered critical for autonomous operation.
The company also highlighted a new capability: instrument reading.
Industrial facilities rely on equipment such as pressure gauges, thermometers, and sight glasses, which require regular monitoring. DeepMind says the model can interpret these instruments visually, combining object detection, spatial reasoning, and contextual understanding to generate accurate readings.
“Capabilities like instrument reading and more reliable task reasoning will enable Spot to see, understand, and react to real-world challenges completely autonomously,” said Marco da Silva, vice president and general manager of Spot at Boston Dynamics, in comments included in the DeepMind blog.
Expanding industrial use cases
Boston Dynamics says the integration will expand the range of inspection tasks Spot can perform, including:
- Monitoring equipment such as gauges and conveyor systems
- Conducting safety and compliance checks, including 5S audits
- Detecting hazards such as leaks or debris
- Tracking materials and inventory movement
The company also claims the system can improve inspection accuracy and performance while enabling “zero-downtime upgrades,” where AI models are updated in the cloud without interrupting operations.
In addition, Boston Dynamics says the system provides “transparent reasoning”, allowing users to see how the AI arrives at its conclusions – a feature that may be important for industrial users concerned with accountability and decision-making.
Data and deployment considerations
Boston Dynamics notes that the AIVI-Learning system requires data sharing to function effectively, as the models are trained on data from individual facilities and adapted to each site.
The company says customer data is used to improve the system’s performance but is shared only with Boston Dynamics.
The Gemini-powered AIVI-Learning system rolled out earlier this month and is now live for customers already using the platform.
A broader shift toward AI-driven robotics
The partnership reflects a wider trend in robotics, where companies are increasingly combining physical machines with large-scale AI models to improve autonomy and adaptability.
Rather than relying solely on pre-programmed behaviors, newer systems are designed to interpret complex environments, make decisions, and learn over time – capabilities that are seen as essential for scaling robotics in industrial settings.
While Boston Dynamics and Google position the integration as a step toward more intelligent inspection systems, the real-world impact will likely depend on how reliably these AI models perform across diverse and unpredictable industrial environments.
