By the time you finish reading this sentence, another humanoid robot will have rolled off a production line somewhere in China. That is not hyperbole.
On March 30, 2026, Shanghai-based Agibot announced it had produced its 10,000th humanoid robot – a milestone the company reached after scaling from 5,000 to 10,000 units in just three months. Meanwhile, rival UBTech plans to ramp up output to 5,000 units in 2026 and 10,000 in 2027.
The humanoid robot industry is no longer a futuristic fantasy. It is a mass-production reality unfolding right now – in Chinese factories, logistics centers, and increasingly, commercial spaces like the McDonald’s in Shanghai that recently began testing robot servers.
But here’s the uncomfortable question that comes with this rapid scaling: What happens when something goes wrong?
Industrial incidents highlight safety risks
Concerns about robot safety are not theoretical. Incidents involving industrial robots, while relatively rare, illustrate what can happen when complex systems fail or behave unpredictably.
In July 2023, Peter Hinterdobler, a technician at Tesla’s Fremont plant, was injured while working on a robot that had been moved from its usual position on the Model 3 production line. According to reports, the robot arm activated unexpectedly during maintenance, striking him and causing serious injuries. He is now pursuing legal action against Tesla and robot manufacturer Fanuc.
Other incidents have also been reported in recent years. In 2021, a Tesla engineer at the company’s Texas facility was injured during an interaction with a robot on the factory floor. Earlier, in 2015, a worker at an automotive parts plant in Michigan died following an accident involving an industrial robot.
These cases occurred in controlled industrial environments, where robots typically operate behind safety barriers and under strict procedures. Even in such settings, however, failures can occur due to mechanical issues, software errors, human factors, or some combination of these.
The emergence of humanoid robots introduces a different set of challenges. Unlike traditional industrial machines, humanoids are designed to operate in closer proximity to people, often without physical separation. As their capabilities expand – in strength, mobility, and autonomy – ensuring predictable and safe behavior becomes increasingly important.
The new rulebook
This is precisely why China’s Ministry of Industry and Information Technology (MIIT) published its first national standard system for humanoid robots and embodied intelligence in late February 2026.
Formally known as the “Humanoid Robot and Embodied Intelligence Standard System (2026 Edition)”, it is organized around six pillars:
- foundational standards;
- neuromorphic computing;
- limbs and components;
- system integration;
- application scenarios; and
- safety and ethics.
The framework was developed by the MIIT’s Humanoid Robots and Embodied Intelligence Standardization Technical Committee (HEIS, designation MIIT/TC8), a body comprising over 120 researchers, executives, and policymakers from leading robotics firms, research institutes, and industry users.
Liang Liang, the committee’s secretary-general, described the vision succinctly. By unifying technical specifications, the system is intended to “reduce coordination and adaptation costs across the industrial chain, promote modularization, and avoid low-level redundant work”. But beneath that technocratic language lies a more urgent goal: making powerful machines safe before they enter our homes.
The safety framework – What’s actually in the standard
The standard tackles safety on three levels:
- Physical Safety (Hardware): Mandates specifications for structural integrity, emergency stop mechanisms, thermal management (preventing batteries from overheating), and force limiting – ensuring a robot arm cannot crush a human finger.
- Behavioral Safety (Software): Requires that robots have predictable responses to failure, a concept known as “minimum risk condition”. If a robot loses connection to its control system or encounters an unfamiliar situation, it must default to a safe state – freezing in place or slowly lowering its arms – rather than thrashing unpredictably.
- Ethical & Operational Safety: As Liang Liang told China Daily, with humanoid robots set to enter “thousands of households, safety will be the primary factor”. The framework includes guidelines on when a robot can make autonomous decisions versus when human intervention is required.
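The "minimum risk condition" fallback described above can be sketched as a simple watchdog: if the heartbeat from the control system goes stale, the robot latches into a safe state rather than continuing to act on outdated commands. This is an illustrative sketch only – the class and state names are hypothetical, not drawn from the standard itself:

```python
from enum import Enum, auto
import time

class State(Enum):
    OPERATING = auto()
    SAFE_STOP = auto()  # the "minimum risk condition": freeze, lower arms slowly

class SafetyMonitor:
    """Hypothetical watchdog. If the controller heartbeat goes stale,
    the robot drops into a predefined safe state instead of thrashing."""

    def __init__(self, timeout_s: float = 0.5):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.state = State.OPERATING

    def heartbeat(self) -> None:
        """Called whenever a valid command arrives from the control system."""
        self.last_heartbeat = time.monotonic()

    def check(self) -> State:
        """Run every control cycle. Safe stop latches: a real system
        would require a deliberate human reset, not an automatic resume."""
        if time.monotonic() - self.last_heartbeat > self.timeout_s:
            self.state = State.SAFE_STOP
        return self.state
```

The key design choice is that the safe state is the *default* on any loss of information – the robot does not need to diagnose what went wrong before stopping.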
Yet even the architects of the standard acknowledge its limits. Peng Zhihui, co-founder of Agibot and a deputy director of the HEIS committee, noted that in industrial scenarios, “nearly 80 percent of tasks where humans excel but traditional automation struggles are strongly related to tactile sensing” – and the lack of standardized tactile sensors remains a critical bottleneck.
In other words, a robot might know it shouldn’t crush a human hand, but without reliable tactile sensing, it may not realize it is crushing one until it is too late.
A levels system for humanoids? It already exists
One of the most successful standards in technology today is SAE International's J3016 standard for autonomous driving – the "Levels 0 to 5" framework that gives consumers an intuitive sense of how much control a car cedes to its computer. (SAE International, the US-based Society of Automotive Engineers, was founded in 1905.)
In May 2025, nearly a year before the national framework was released, the Beijing Humanoid Robot Innovation Center (backed by MIIT) published what is believed to be the world’s first Humanoid Robot Intelligence Grading standard (T/CIE 298-2025). It uses what it calls a “Four-Dimension, Five-Level” framework:
The Four Dimensions:
- P – Perception & Cognition: Can the robot sense and understand its environment?
- D – Decision & Learning: Can it plan tasks and learn from experience?
- E – Execution & Performance: Can it move precisely, balance, and manipulate objects?
- C – Collaboration & Interaction: Can it work safely with humans and other robots?
The Five Levels (L1–L5):
| Level | Name | What It Means |
|---|---|---|
| L1 | Basic Capability | Simple, pre-programmed actions; no adaptation |
| L2 | Perception Capability | Can sense environment but limited decision-making |
| L3 | Conditional Autonomy | Can handle specific tasks autonomously under supervision |
| L4 | High Autonomy | Operates independently in defined scenarios; human backup available |
| L5 | Full Autonomy | Complete independence in any environment; no human needed |
The standard includes 22 primary indicators and more than 100 technical provisions, along with a “universal safety baseline” and mappings to typical application scenarios. The creators explicitly acknowledged borrowing from autonomous vehicle grading logic – but adapted it for machines that walk, grasp, and share space with humans.
A proposal: Adding 'Level 0' – and why harm potential matters
The SAE framework includes Level 0 (No Automation) – a car where the human driver does everything. For humanoids, Level 0 would represent a fully mechanical robot with no autonomous behavior whatsoever, operated entirely by remote control or pre-programmed sequences.
This gives the public a baseline they can intuitively understand: the "dumb" robot – safe because it cannot act independently.
But autonomy is only half the risk equation. The other half is harm potential – how much damage the robot could cause if something goes wrong.
An L5 (fully autonomous) household companion robot that weighs 10 kilograms and moves slowly poses a very different risk from an L5 industrial humanoid that can lift 50 kilograms and sprint at 15 km/h.
Realistically, no commercial robot manufacturer would voluntarily accept a public “harm potential” rating – it would terrify buyers. But regulators need to think in these terms.
A two-axis model – Autonomy Level (L0–L5) and Harm Potential (H1–H3) – would give safety inspectors a framework for certification. An L5-H3 robot (fully autonomous, high harm potential) would require redundant emergency stops, mandatory human-supervised testing, and perhaps even geofencing to prevent operation in public spaces.
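To make the proposal concrete, the two axes can be read as a lookup: requirements accumulate as either autonomy or harm potential rises. The thresholds below are illustrative assumptions, not part of any published standard:

```python
def certification_requirements(autonomy: int, harm: int) -> list[str]:
    """Hypothetical two-axis classification: Autonomy L0-L5 crossed
    with Harm Potential H1-H3. Returns the certification measures
    an inspector might require for that combination."""
    assert 0 <= autonomy <= 5 and 1 <= harm <= 3
    reqs = ["universal safety baseline"]       # applies to every robot
    if harm >= 2:
        reqs.append("redundant emergency stops")
    if autonomy >= 3:
        reqs.append("human-supervised testing")
    if autonomy >= 4 and harm == 3:
        # high autonomy plus high harm potential: restrict where it may operate
        reqs.append("geofencing outside certified zones")
    return reqs
```

Under this sketch, an L5-H3 industrial humanoid triggers every measure, while an L0-H1 remote-controlled companion robot needs only the baseline – which is exactly the intuition the two-axis model is meant to capture.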
The geopolitics of standards
Standards are never just technical documents. They are also strategic tools in global competition. A report from the Netherlands Institute of International Relations (Clingendael) notes that China has transformed “from a reactive standards-taker into a proactive standards-maker” since 2018, using initiatives like “China Standards 2035” to embed Chinese technical specifications into global supply chains.
Can standards be used to lock out foreign competitors? Not directly – that would violate WTO rules on technical barriers to trade. You cannot refuse approval to a foreign company that demonstrably meets the standard.
But you can make the process of proving compliance expensive, time-consuming, and reliant on local testing facilities. You can also design standards around proprietary technologies in which Chinese firms hold key patents.
This is why mutual recognition agreements – like those under the Belt and Road Initiative’s “standards connectivity” program – are so strategically important. The country that writes the rulebook often wins the race, even if the rules themselves are openly published.
No guarantees – but a framework
Can the new Chinese standard guarantee that a humanoid robot will never crush a human skull? No. No standard can offer an absolute guarantee. Machines fail. Software has bugs. Humans make mistakes.
What the standard can do is create a transparent framework for assessing risk, comparing capabilities, and holding manufacturers accountable. It can mandate that robots have emergency stops, force limits, and predictable failure modes. It can require testing and documentation. It can – and does – set a universal safety baseline that all manufacturers must meet.
As Wang Xingxing, founder and CEO of Unitree Robotics and a deputy director of the HEIS committee, put it: “To enable humanoid robots to genuinely work, particularly on long-sequence tasks, industry-wide standards are absolutely essential.”
With production ramping from thousands to tens of thousands – Agibot went from 5,000 to 10,000 units in just three months – the industry has no time to waste. The robots are coming. The only question is whether we will have the rules in place before they arrive.
