Computer games engine Unity introduces robotics simulation system
Unity, a software platform for making computer games, has unveiled a robotics simulation system.
Unity is one of the most popular platforms for creating and operating real-time 3D games and other digital content. Its new offering, the Object Pose Estimation demonstration, “combines the power of computer vision and simulation technologies”, the company says, adding that it illustrates how Unity’s artificial intelligence and machine learning capabilities are “having real-world impact on the use of robotics in industrial settings”.
Object Pose Estimation and its corresponding demonstration come on the heels of recent releases aimed at supporting the prominent Robot Operating System (ROS), a flexible framework for writing robot software.
The combination of these and other Unity tools opens the door for roboticists to explore, test, develop, and deploy solutions safely, cost-effectively, and quickly.
Dr Danny Lange, senior vice president of artificial intelligence, Unity, says: “This is a powerful example of a system that learns instead of being programmed, and as it learns from the synthetic data, it is able to capture much more nuanced patterns than any programmer ever could.
“Layering our technologies together shows how we are crossing a line, and we are starting to deal with something that is truly AI, and in this case, demonstrating the efficiencies possible in training robots.”
Simulation is highly effective for testing applications in situations that are dangerous, expensive, or rare. Validating applications in simulation before deploying them to the robot shortens iteration time by revealing potential issues early.
The combination of Unity’s built-in physics engine and the Unity Editor can be used to create endless permutations of virtual environments, with objects controlled by an approximation of the forces that act on them in the real world.
The Object Pose Estimation demo follows the release of two other Unity tools. The first, Unity’s URDF Importer, is an open-source Unity package for importing a robot into a Unity scene from its URDF file; it takes advantage of enhanced support for articulations in Unity to produce more realistic kinematic simulations. The second, Unity’s ROS-TCP-Connector, greatly reduces the latency of messages passed between ROS nodes and Unity, allowing the robot to react in near real time to its simulated environment.
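As a rough illustration of that messaging flow, the sketch below shows a minimal ROS-side publisher of the kind a Unity subscriber connected through the ROS-TCP-Connector could listen to. The topic name and joint names are illustrative assumptions, not details taken from Unity’s packages.

```python
#!/usr/bin/env python
# Minimal ROS 1 (rospy) node publishing joint targets. A Unity scene using
# the ROS-TCP-Connector can subscribe to this topic via the companion
# ROS-TCP-Endpoint node running on the ROS side.
# The topic and joint names below are hypothetical, for illustration only.

import math

import rospy
from sensor_msgs.msg import JointState


def main():
    rospy.init_node("unity_joint_commander")
    pub = rospy.Publisher("/unity_robot/joint_commands", JointState, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz

    t = 0.0
    while not rospy.is_shutdown():
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = ["joint_1", "joint_2", "joint_3"]
        # Sweep the first joint sinusoidally as a placeholder command.
        msg.position = [0.5 * math.sin(t), 0.0, 0.0]
        pub.publish(msg)
        t += 0.1
        rate.sleep()


if __name__ == "__main__":
    main()
```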
Today’s demo builds on this work, showing how Unity Computer Vision tools and the recently released Perception Package can be used to create vast quantities of synthetic, labelled training data. That data is then used to train a simple deep learning model to predict a cube’s position.
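The article does not specify the model, but a pose regressor of this kind is typically a small convolutional network trained with a regression loss. The PyTorch sketch below is a hypothetical example of that setup; the architecture, image size, and the random tensors standing in for Perception Package output are all assumptions.

```python
# A minimal sketch of the kind of model described: a small CNN regressing an
# object's (x, y, z) position from a rendered RGB image. The architecture and
# data are illustrative, not Unity's actual model or dataset.

import torch
import torch.nn as nn


class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool to (B, 64, 1, 1)
        )
        self.head = nn.Linear(64, 3)  # predict (x, y, z)

    def forward(self, images):
        return self.head(self.features(images).flatten(1))


model = PoseRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch standing in for synthetic (image, position) pairs.
images = torch.randn(8, 3, 224, 224)
targets = torch.randn(8, 3)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(images), targets)
    loss.backward()
    optimizer.step()
```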
The demo provides a tutorial on how to recreate this project, which can be extended by applying tailored randomizers to create more complex scenes.
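In the Perception Package, randomizers are C# components attached to the scene; the short Python sketch below only illustrates the underlying idea of sampling fresh scene parameters for every synthetic frame. All parameter names and ranges here are invented for illustration.

```python
# Conceptual sketch of a scene "randomizer": sample new scene parameters for
# each synthetic frame so the training data covers many conditions. In Unity's
# Perception Package this is done by C# randomizer components; the names and
# ranges below are hypothetical.

import random


def randomize_scene():
    """Return one random scene configuration (all ranges illustrative)."""
    return {
        "cube_position_m": [
            random.uniform(-0.3, 0.3),  # x
            random.uniform(-0.3, 0.3),  # y
            random.uniform(0.0, 0.2),   # z
        ],
        "cube_rotation_deg": random.uniform(0.0, 360.0),
        "light_intensity": random.uniform(0.5, 2.0),
        "camera_jitter_deg": random.uniform(-5.0, 5.0),
    }


# Each configuration drives one render and also provides the ground-truth
# pose label for that frame.
dataset = [randomize_scene() for _ in range(10_000)]
```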
Lange says: “With Unity, we have not only democratized data creation, we’ve also provided access to an interactive system for simulating advanced interactions in a virtual setting.
“You can develop the control systems for an autonomous vehicle, for example, or here for highly expensive robotic arms, without the risk of damaging equipment or dramatically increasing the cost of industrial installations.
“To be able to prove the intended applications in a high-fidelity virtual environment will save time and money for the many industries poised to be transformed by robotics combined with AI and machine learning.”