SoftBank and Yaskawa Electric have signed a memorandum of understanding to jointly develop “Physical AI” and deploy what the companies describe as a new class of robots that combine advanced decision-making with flexible, multi-purpose physical capabilities.
The collaboration brings together SoftBank’s AI-RAN initiative and Multi-access Edge Computing (MEC) platform with Yaskawa Electric’s robotics and motion-control technologies.
According to the press release, the companies aim to accelerate the “social implementation” of Physical AI as Japan faces labour shortages, demographic pressures, and increasingly complex business operations.
As a first step, the two companies have developed a use case for an office-oriented robot that integrates with building management systems and uses AI running on MEC.
Where conventional robots are generally limited to narrow, pre-defined tasks, SoftBank and Yaskawa say the new system integrates multiple data streams in real time, enabling robots to “flexibly perform a wider variety of tasks” and achieve “multi-skilled functionality”.
A demonstration of the system will take place at the 2025 International Robot Exhibition (iREX 2025) at Tokyo Big Sight from December 3 to 6.
Addressing Japan’s automation challenges
The companies frame the partnership in the context of Japan’s urgent need for greater automation in settings where humans and machines must coexist, including offices, schools, hospitals and retail spaces.
The press release notes that these environments require “complex decision-making and flexible responses to unpredictable situations,” making conventional automation difficult to deploy.
Through the integration of SoftBank’s AI-RAN with Yaskawa’s robotics, the companies aim to “build a domestically developed AI infrastructure and develop new solutions in the field of physical AI within Japan”.
The goal is to enable robots to carry out more advanced reasoning and expand the scope of tasks they can safely perform in human environments.
Technology roles and system design
Yaskawa Electric is contributing its long-standing expertise in precision motion control and industrial robotics. The company is developing MOTOMAN NEXT, an autonomous robot designed to use AI for more advanced decision-making and operational flexibility.
SoftBank, meanwhile, is providing the MEC environment and AI-RAN capabilities. Its edge-computing platform enables low-latency processing of large volumes of sensor and camera data, giving robots what SoftBank calls an “external perspective” for interpreting their surroundings.
SoftBank has also developed a Vision-Language Model (VLM) that acts as the MEC-based AI responsible for task generation.
Yaskawa has developed a complementary Vision-Language-Action (VLA) system that functions as Robot AI, translating the VLM’s instructions into specific robotic movements.
The companies say the system architecture includes a next-generation building management system, a MEC AI layer, and Robot AI, all integrated to coordinate real-time decision-making.
This enables scenarios such as a robot identifying and retrieving a specific smartphone from an office shelf and responding to unexpected events inside a building.
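The companies have not published interfaces for this architecture, but the division of labour they describe, a MEC-hosted VLM that plans tasks from building cameras and sensors, and an on-robot VLA that turns each step into motion, can be illustrated with a minimal sketch. All class names, fields, and values below are hypothetical placeholders, not an actual SoftBank or Yaskawa API.

```python
"""Sketch of the described split: a MEC-side planner (standing in for the VLM)
proposes task steps from a goal and camera input, and a robot-side controller
(standing in for the VLA) maps each step onto motion commands.
All names and parameters are illustrative assumptions."""

from dataclasses import dataclass, field
from typing import List


@dataclass
class TaskStep:
    """One step emitted by the MEC AI, e.g. 'pick the black smartphone on shelf A'."""
    action: str                     # high-level verb: "navigate", "pick", "place"
    target: str                     # object or location description
    constraints: dict = field(default_factory=dict)  # e.g. speed limits near people


class MecTaskGenerator:
    """Stands in for the MEC-hosted VLM that generates tasks using the
    'external perspective' of building cameras and sensors."""

    def plan(self, goal: str, camera_frames: List[bytes]) -> List[TaskStep]:
        # A real system would query a vision-language model here; this
        # placeholder returns a fixed plan for the smartphone-retrieval example.
        return [
            TaskStep("navigate", "office shelf A", {"max_speed_mps": 0.8}),
            TaskStep("pick", "black smartphone with visible label", {"grip_force_n": 5}),
            TaskStep("navigate", "reception desk", {"max_speed_mps": 0.5}),
            TaskStep("place", "reception desk tray"),
        ]


class RobotVlaController:
    """Stands in for the robot-side VLA (Robot AI) that translates each
    task step into concrete motion commands."""

    def execute(self, step: TaskStep) -> None:
        # In practice this would drive the robot's motion-control stack;
        # here we only log the translation from task step to motion intent.
        print(f"[robot] {step.action} -> {step.target} (constraints: {step.constraints})")


if __name__ == "__main__":
    plan = MecTaskGenerator().plan(
        goal="Retrieve the black smartphone from the office shelf",
        camera_frames=[],  # building cameras would supply frames via MEC
    )
    controller = RobotVlaController()
    for step in plan:
        controller.execute(step)
```

The point of the split, as the companies describe it, is that the heavier perception and task-generation workload stays on low-latency edge infrastructure while the robot itself runs only the action model, though the exact boundary between the two layers is not specified in the press release.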
Toward multi-skilled robots in everyday workplaces
The collaboration “goes beyond conventional automation and digitalization frameworks” by merging robotics with advanced communication infrastructure, according to the press release. The companies say they intend to demonstrate how one robot can perform multiple roles traditionally requiring several specialised machines.
The use case will be shown at Yaskawa’s booth during iREX 2025. SoftBank and Yaskawa say they plan to continue developing technologies that allow humans and robots to work “safely and in collaboration within the same space”, supported by AI and communication technologies that broaden the range of tasks robots can perform.
