By Nohtal Partansky, founder and CEO, and Patrick DeGrosse Jr, director of engineering, Sorting Robotics
Most robotics deployments don’t run into trouble because a motor was undersized or because someone wrote bad control logic. In our experience, the problems usually show up after the system leaves validation and starts operating in an environment that behaves nothing like the one it was tested in.
In a lab, conditions are stable and closely observed. The air is relatively clean, operators are attentive and specifically trained, and throughput is controlled. If something drifts out of calibration, it is usually caught early because the entire purpose of the environment is observation and testing. Production environments are built around output, not observation.
Once a system moves into a 24/7 facility, particularly in a regulated setting, the priorities shift. The machine is expected to keep up with volume targets, staffing rotates, and environmental variables that were minor during testing become persistent realities. That shift changes how small technical issues evolve into more serious problems over time.
Lab assumptions don’t survive continuous production
There is a common assumption that if a robot performs consistently during validation, it will continue performing that way once scaled. In practice, that assumption erodes over the first several months of continuous operation.
Extended runtime introduces slow-moving variables that are difficult to replicate in controlled testing. Fine particulates accumulate in places that did not present issues during short test cycles. Viscous materials behave differently after ten hours of operation than they do after one.
Temperature and humidity fluctuations across weeks begin to influence calibration stability in subtle ways. None of these factors typically cause immediate failure.
Systems that performed flawlessly during commissioning can begin to widen their output variance once exposed to sustained production volume. Bearings remain intact and sensors continue reporting within acceptable ranges, yet calibration offsets slowly increase.
In regulated cannabis production, where dosing tolerances are defined and enforced, that widening matters. A system can be mechanically functional while gradually trending toward noncompliance. Mechanical stability does not automatically equal regulatory stability, particularly when tolerance thresholds are narrow and production volume magnifies even small deviations.
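How quickly widening variance becomes a compliance problem can be made concrete with a simple process-capability calculation: for a normally distributed output, the fraction of units falling outside a tolerance band grows sharply as the mean drifts and the spread widens. The sketch below uses illustrative numbers (a 100 mg dose target with a ±5 mg tolerance, and made-up before/after process statistics), not figures from any specific deployment.

```python
import math

def out_of_spec_fraction(mean: float, sigma: float, low: float, high: float) -> float:
    """Expected fraction of output outside [low, high] for a normal process."""
    def cdf(x):  # standard normal CDF via the error function
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return cdf((low - mean) / sigma) + (1.0 - cdf((high - mean) / sigma))

# Illustrative dosing spec: 100 mg target, +/- 5 mg tolerance.
LOW, HIGH = 95.0, 105.0

# During commissioning: tight, well-centered process.
fresh = out_of_spec_fraction(mean=100.0, sigma=1.2, low=LOW, high=HIGH)

# Months later: a small calibration offset plus slightly wider variance.
drifted = out_of_spec_fraction(mean=101.5, sigma=2.0, low=LOW, high=HIGH)

print(f"commissioning: {fresh:.4%} of units out of spec")
print(f"after drift:   {drifted:.4%} of units out of spec")
```

With these assumed numbers, the out-of-spec rate goes from a few units per hundred thousand to several percent, even though every individual measurement still looks roughly normal; at thousands of units per shift, that is a steady stream of noncompliant product.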
Testing demonstrates capability. Continuous production reveals durability.
Uptime is an operational discipline
When uptime becomes the focus, discussions tend to move toward larger motors, added redundancy, or reinforced components. Those changes have their place. Over time, though, reliability is shaped more by how the system is maintained and operated than by how aggressively it was specified on paper.
We have deployed identical systems in facilities operating at comparable volumes and watched their performance diverge over time. The difference came down to attention to detail.
One site kept to its cleaning schedule even during peak demand, replaced wear parts before failure, and investigated small alerts while they were still small. At another site running similar volume, maintenance was postponed to keep lines moving, cleaning became inconsistent, and warning messages were treated as background noise until something forced attention.
Always-on production does not eliminate downtime; it requires structured downtime. In regulated environments, skipping maintenance does more than increase mechanical risk. It increases the probability that contamination, calibration drift, or wear will affect product output before the deviation becomes visible in reporting.
Hardware robustness contributes to uptime. Maintenance discipline sustains it.
Where deployments quietly break
Technical failures are visible and usually prompt immediate action. Human variability is more gradual.
During installation, there is often a primary operator who understands the system at a deeper level because they participated in training and commissioning. As staffing changes over time, knowledge transfer becomes informal. Documentation exists, but practical understanding erodes as new operators focus on keeping the line moving.
No single shortcut destabilizes a system. However, small deviations in cleaning routines, recalibration schedules, or inspection habits accumulate. Facilities that maintain long-term performance assign ownership clearly.
The sites that stay stable usually have a clear point person who understands more than just the start and stop buttons. That individual knows the machine’s capabilities and limits, keeps an eye on maintenance cycles, and makes sure validation steps are not skipped when things get busy.
Failures rarely begin with a breakdown. They begin with gradual drift that goes unaddressed.
Drift is a compliance and business risk
In high-throughput environments, small calibration shifts compound quickly. If thousands of units are produced per shift, even fractional variance has measurable impact. Regulatory frameworks define acceptable output ranges, and remaining within those ranges requires systematic monitoring rather than assumption.
Drift rarely presents as a sudden event. It accumulates through extended exposure, minor wear, and inconsistent recalibration. For that reason, monitoring architecture, defined recalibration intervals, and clear audit trails need to be incorporated during system design rather than added after scale is achieved. Visibility into variance trends is as important as the motion platform itself.
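One lightweight form that monitoring can take is a rolling comparison of measured output against the target, with an audit trail and a recalibration flag when the rolling offset exceeds a threshold. The sketch below is a minimal illustration of that idea, not any particular vendor's implementation; the target, window size, and threshold are placeholder values.

```python
from collections import deque
from statistics import mean

class DriftMonitor:
    """Rolling drift check against a target, with a simple audit trail.

    Window size and warning threshold are illustrative placeholders;
    real values would come from the product's tolerance spec.
    """

    def __init__(self, target: float, warn_offset: float, window: int = 50):
        self.target = target
        self.warn_offset = warn_offset
        self.samples = deque(maxlen=window)
        self.audit_log: list[str] = []

    def record(self, measured: float) -> bool:
        """Record one unit's measurement; return True when recalibration is due."""
        self.samples.append(measured)
        offset = mean(self.samples) - self.target
        self.audit_log.append(f"n={len(self.samples)} offset={offset:+.3f}")
        # Only flag once the window is full and the mean offset exceeds the limit.
        return len(self.samples) == self.samples.maxlen and abs(offset) > self.warn_offset

monitor = DriftMonitor(target=100.0, warn_offset=0.5, window=50)
# Simulate slow drift: each unit trends 0.02 above the previous one.
needs_recal = [monitor.record(100.0 + 0.02 * i) for i in range(120)]
print("first flagged at unit:", needs_recal.index(True) + 1)
```

Even a check this simple catches the pattern the article describes: no single measurement looks alarming, but the rolling offset crosses the line long before a hard failure would.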
Rethinking rigid architectures
It is not unusual to see production teams commit fully to ecosystems built by vendors such as Siemens or Rockwell Automation. Once a plant has invested years into a specific controls platform, retraining staff and shifting tooling becomes a real operational hurdle.
That approach has operational benefits, especially in established industries. At the same time, when regulatory requirements or workflows begin shifting, tightly coupled systems can be slower to adapt than expected.
In deployments built with modular systems that integrate best-fit components through stronger software and monitoring layers, long-term adjustments tend to be more manageable.
A motion platform from a manufacturer such as Kawasaki Robotics may be appropriate for a specific application, but deployment stability depends equally on how calibration data, environmental feedback, and compliance metrics are integrated and tracked across the system.
Taking a modular approach places more responsibility on the internal engineering team to manage integration and validation. The tradeoff is flexibility when production demands or compliance standards change.
Build systems for the reality of production
Continuous operation by itself isn’t what causes most problems. The friction shows up when systems are designed around lab assumptions and then placed into environments defined by contamination, staffing turnover, fluctuating throughput, and regulatory oversight. In high-volume, regulated facilities, those factors are part of daily reality rather than edge cases.
In deployments that last, teams plan for these challenges from day one. Operators schedule maintenance, track calibration, and build processes that make compliance visible in everyday work. Clear responsibilities and consistent routines become as critical as the machine parts themselves.
Robotics reliability depends not just on motors or sensors. It requires designing systems to operate effectively in the real environment, where people, processes, and conditions matter as much as the machines themselves.
The important thing to remember is that the extraordinary improvements in efficiency and consistency robotics can bring translate into major product improvements and cost savings. Viewed through that lens, a little extra time spent on training, maintenance, and monitoring is a small price to pay for those benefits.


About the authors: Former NASA-JPL engineer Nohtal Partansky is the CEO of Sorting Robotics and was previously lead engineer on NASA's SOXE Assembly for the MOXIE instrument on M2020, handling cradle-to-grave RDT&E for a device that produced oxygen on the surface of Mars. Patrick DeGrosse Jr, director of engineering at Sorting Robotics, has 20 years of NASA experience leading local and international teams, cradle to grave, to put mechanisms on Mars across multiple flagship flight projects.
