Agibot has unveiled a new generation of embodied AI robots and foundation models, signaling what the company describes as a shift toward large-scale deployment of “physical AI” across industrial and commercial environments.
The announcements were made at the company’s 2026 Partner Conference, where Agibot introduced multiple robotic platforms and AI systems built around its “One Robotic Body, Three Intelligences” architecture, designed to integrate motion, manipulation, and human interaction into a unified system.
Peng Zhihui, co-founder, president and CTO of Agibot, said the industry is moving beyond experimentation toward practical deployment.
“Embodied intelligence is no longer a concept, it is becoming a new form of productive infrastructure,” Peng said.
“We are moving embodied intelligence from laboratory curiosity to production-line reality, enabling robots to truly integrate into human workflows and create measurable value across major scenarios.”
New robotic platforms target real-world use cases
Agibot introduced several new robotic systems aimed at different applications, from industrial operations to retail and inspection.
Among them is the A3 humanoid robot, designed for interactive environments such as entertainment and customer engagement. The system features lightweight construction, extended operating time, and multi-robot coordination capabilities.
The company also presented the G2 Air, a compact mobile manipulator designed for human-robot collaboration in environments such as logistics, retail, and hospitality. The system combines task execution with real-time data collection, which the company says supports ongoing AI training.
In manipulation, Agibot expanded its OmniHand range, including the OmniHand 3 Ultra-T, a dexterous robotic hand designed for precision tasks, alongside industrial and ruggedized variants aimed at broader deployment.
For field and industrial operations, the company introduced the D2 Max quadruped robot, designed for inspection, security, and emergency response scenarios. Agibot says the system is capable of autonomous operation across complex terrain.
AI models and data systems underpin the platform
Alongside the hardware, Agibot unveiled eight AI models spanning locomotion, manipulation, and interaction, forming what it describes as a unified physical AI platform.
These include systems for learning from demonstrations, generating motion from multimodal inputs, and improving task execution through real-world data and simulation. The company also introduced MEgo, a data collection system designed to generate training data without relying on robotic hardware.
The models are designed to operate in a closed-loop system, continuously improving through deployment and feedback from real-world operations.
Full-stack ecosystem for scaling deployment
Agibot also highlighted its broader software ecosystem, including operating systems, development tools, and simulation platforms aimed at simplifying deployment and customization.
The company says these tools are intended to lower barriers for adoption and enable partners to build and scale applications more quickly across industries.
Focus on deployment over demonstration
The announcements reflect a broader trend in the robotics sector, where companies are increasingly focusing on measurable outcomes rather than demonstrations of capability.
Agibot says it has already deployed hundreds of robots across multiple projects and is expanding its partner ecosystem to support wider adoption.
While many of the technologies remain at an early stage, the company’s strategy points toward embodied AI integrated into everyday industrial workflows: a shift from standalone machines to coordinated, data-driven systems designed to deliver productivity gains at scale.
Main image: Peng Zhihui, co-founder, president and CTO of Agibot, demonstrates interactive intelligence with Agibot X2 (PRNewsfoto/Agibot)
