Automated fluid dispensing systems are widely used in industry today, making the delivery and distribution of fluids easier and more precise. [Read more…] about What is an automated fluid dispensing system?
Features
ZF partners with Beep to bring autonomous shuttle to US market
ZF has unveiled its next-generation shuttle for autonomous driving in urban environments and mixed traffic at the 2023 Consumer Electronics Show (CES) in Las Vegas, Nevada, USA. The next generation complements the established model, which is primarily designed for use in segregated lanes.
For the new shuttle generation, ZF has announced a strategic partnership with US mobility services provider Beep. The agreement aims to deliver several thousand shuttles to customers over the coming years, combining ZF’s ATS with Beep’s mobility services and service management platform into a single-source autonomous mobility solution.
Torsten Gollewski, executive vice president, autonomous mobility systems at ZF, says: “In order to reduce traffic-related emissions in metropolitan areas, a reduction in motorized individual transport and a simultaneous expansion of more sustainable, efficient, comfortable, and affordable mobility options are required.” [Read more…] about ZF partners with Beep to bring autonomous shuttle to US market
Intel Labs introduces SPEAR: An open-source photorealistic simulator for embodied AI
By Mike Roberts, a research scientist at Intel Labs, where he works on using photorealistic synthetic data for computer vision applications
Interactive simulators are becoming powerful tools for training embodied artificial intelligence (AI) systems, but existing simulators have limited content diversity, physical interactivity, and visual fidelity.
To better serve the embodied AI developer community, Intel Labs has collaborated with the Computer Vision Center in Spain, Kujiale in China, and the Technical University of Munich to develop the Simulator for Photorealistic Embodied AI Research (SPEAR).
This highly realistic simulation platform helps developers to accelerate the training and validation of embodied agents for a growing set of tasks and domains.
With its large collection of photorealistic indoor environments, SPEAR applies to a wide range of household navigation and manipulation tasks. Ultimately, SPEAR aims to drive research and commercial applications in household robotics and manufacturing, including human-robot interaction scenarios and digital twin applications.

To create SPEAR, Intel Labs worked closely with a team of professional artists for over a year to construct a collection of high-quality, handcrafted, interactive environments. Currently, SPEAR features a starter pack of 300 virtual indoor environments with more than 2,500 rooms and 17,000 objects that can be manipulated individually.
These interactive training environments use detailed geometry, photorealistic materials, realistic physics, and accurate lighting. New content packs targeting industrial and healthcare domains will be released soon.
By offering larger, more diverse, and realistic environments, SPEAR helps throughout the development cycle of embodied AI systems, and enables training robust agents to operate in the real world, potentially even straight from simulation.
SPEAR helps to improve accuracy on many embodied AI tasks, especially traversing and rearranging cluttered indoor environments. Ultimately, SPEAR aims to decrease the time to market for household robotics and smart warehouse applications, and increase the spatial intelligence of embodied agents.
Challenges in Training and Validating Embodied AI Systems
In the field of embodied AI, agents learn by interacting with different variables in the physical world. However, capturing and compiling these interactions into training data can be time consuming, labor intensive, and potentially dangerous.
In response to this challenge, the embodied AI community has developed a variety of interactive simulators, where robots can be trained and validated in simulation before being deployed in the physical world.
While existing simulators have enabled rapid progress on increasingly complex and open-ended real-world tasks such as point-goal and object navigation, object manipulation, and autonomous driving, these simulators have several limitations.
Simulators that use artist-created environments typically provide a limited selection of unique scenes, such as a few dozen homes or a few hundred isolated rooms, which can lead to severe over-fitting and poor sim-to-real transfer performance.
On the other hand, simulators that use scanned 3D environments provide larger collections of scenes, but offer little or no interactivity with objects.
In addition, both types of simulators offer limited visual fidelity, either because it is too labor intensive to author high-resolution art assets, or because of 3D scanning artifacts.

Overview of SPEAR
SPEAR was designed based on three main requirements:
- support a collection of environments that is as large, diverse, and high-quality as possible;
- provide sufficient physical realism to support realistic interactions with a wide range of household objects; and
- offer as much photorealism as possible, while still maintaining enough rendering speed to support training complex embodied agent behaviors.
Motivated by these requirements, SPEAR was implemented on top of the Unreal Engine, an industrial-strength game engine. SPEAR environments are implemented as Unreal Engine assets, and SPEAR provides an OpenAI Gym interface to interact with environments via Python.
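As a rough illustration of that workflow, the snippet below sketches a classic Gym-style reset/step loop in Python. The stand-in policy and the two-element action format are assumptions made for the example, not SPEAR's documented API, and environment construction is omitted because it is SPEAR-specific.

```python
# Minimal sketch of driving a Gym-style simulator loop from Python.
# NOTE: the stand-in policy and the two-element action are illustrative
# assumptions, not part of SPEAR's documented API; the env object is
# assumed to follow the classic Gym reset/step convention.
import numpy as np

def random_policy(observation):
    # Stand-in policy: random wheel velocities in [-1, 1] for a wheeled agent.
    return np.random.uniform(-1.0, 1.0, size=2)

def run_episode(env, policy=random_policy, max_steps=500):
    """Roll out one episode using the classic Gym reset/step loop."""
    observation = env.reset()
    episode_return = 0.0
    for _ in range(max_steps):
        action = policy(observation)
        observation, reward, done, info = env.step(action)
        episode_return += reward
        if done:  # e.g. goal reached or obstacle hit
            break
    return episode_return
```

In practice, the environment object would be constructed from one of the SPEAR scene assets described above; see the SPEAR GitHub page for the actual interface.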

SPEAR currently supports four distinct embodied agents:
- The OpenBot Agent provides identical image observations to a real-world OpenBot, implements an identical control interface, and has been modeled with accurate geometry and physical parameters. It is well-suited for sim-to-real experiments.
- The Fetch Agent and LoCoBot Agent have also been modeled using accurate geometry and physical parameters, and each has a physically realistic gripper. These agents are ideal for rearrangement tasks.
- The Camera Agent can be teleported anywhere, making it useful for collecting static datasets.
Figure 3. The LoCoBot Agent is suitable for both navigation and manipulation in simulation. This agent’s realistic gripper makes it ideal for rearrangement tasks.
By default, agents return photorealistic egocentric observations from camera sensors, as well as wheel encoder states and joint encoder states. Additionally, agents can optionally return several types of privileged information.
First, agents can return a sequence of waypoints representing the shortest path to a goal location, as well as GPS and compass observations that point directly to the goal, both of which can be useful when defining navigation tasks.
Second, agents can return pixel-perfect semantic segmentation and depth images, which can be useful when controlling for the effects of imperfect perception in downstream embodied tasks and collecting static datasets.
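To make that observation layout concrete, the sketch below assembles a dictionary with one entry per sensor mentioned above. The key names and array shapes are illustrative assumptions, not SPEAR's actual observation space.

```python
# Illustrative observation layout for an embodied agent; key names and
# shapes are assumptions for this sketch, not SPEAR's actual observation space.
import numpy as np

observation = {
    # Default sensors
    "camera.rgb":           np.zeros((480, 640, 3), dtype=np.uint8),   # egocentric color image
    "wheel_encoders":       np.zeros(2, dtype=np.float64),             # per-wheel encoder states
    "joint_encoders":       np.zeros(7, dtype=np.float64),             # arm joint states (Fetch/LoCoBot)
    # Optional privileged information
    "shortest_path":        np.zeros((16, 3), dtype=np.float64),       # waypoints to the goal
    "gps_compass":          np.zeros(3, dtype=np.float64),             # distance and heading to the goal
    "camera.segmentation":  np.zeros((480, 640), dtype=np.int32),      # pixel-perfect semantic labels
    "camera.depth":         np.zeros((480, 640), dtype=np.float32),    # metric depth image
}
```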
SPEAR currently supports two distinct tasks:
- The Point-Goal Navigation Task randomly selects a goal position in the scene’s reachable space, computes a reward based on the agent’s distance to the goal, and triggers the end of an episode when the agent hits an obstacle or reaches the goal (a minimal sketch of this logic follows the list).
- The Freeform Task is an empty placeholder task that is useful for collecting static datasets.
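The sketch below works through the reward and termination logic described for the Point-Goal Navigation Task: a distance-based reward and an episode that ends when the agent reaches the goal or hits an obstacle. The shaping term and the goal radius are assumptions for the example, not SPEAR's exact definitions.

```python
# Sketch of a distance-based point-goal reward with simple termination logic;
# the shaping term and goal radius are illustrative assumptions, not SPEAR's
# exact reward definition.
import numpy as np

def point_goal_step(agent_position, goal_position, previous_distance,
                    hit_obstacle, goal_radius=0.5):
    """Return (reward, done, distance) for one step of point-goal navigation."""
    distance = float(np.linalg.norm(np.asarray(goal_position) - np.asarray(agent_position)))
    reward = previous_distance - distance       # positive when moving toward the goal
    reached_goal = distance < goal_radius
    done = reached_goal or hit_obstacle         # episode ends on goal or collision
    return reward, done, distance
```

Called once per step with the collision flag from the simulator, it returns the shaped reward, the done flag, and the new distance to carry into the next step.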
SPEAR is available under an open-source MIT license, ready for customization on any hardware. For more details, visit the SPEAR GitHub page.
Collaborative robots. Co-operative robots. What’s the difference?
Collaborative robots, commonly known as cobots, are robots that work side by side with human operators. Unlike traditional industrial robots, which are separated from human operators, cobots share the same workspace as humans.
“Co-operative robot” is a newer term that sounds similar to “collaborative robot” but describes a different mode of operation: co-operative robots are industrial robots with a virtual fence between themselves and human operators.
In terms of product positioning, the co-operative robot sits between industrial robots and cobots, combining the benefits of both with the aid of safety sensors (typically laser scanners). [Read more…] about Collaborative robots. Co-operative robots. What’s the difference?
Reinforcing the value of simulation: Teaching dexterity to a real robot hand
Nvidia researchers show how training in simulation enables the transfer of complex manipulation skills to a robot hand with project DeXtreme
The human hand is one of the most remarkable outcomes of millions of years of evolution. The ability to pick up all sorts of objects and use them as tools is a crucial differentiator allowing us to shape the world around us.
For robots to work in the everyday human world, the ability to deftly interact with our tools and the environment around them is critical. Without that capability, they will continue to be useful only in specialized domains such as factories or warehouses.
While it has been possible to teach robots with legs how to walk for some time, robots with hands have generally proven to be much trickier to control. A hand with fingers has more joints, and they must move in specific coordinated ways to accomplish a given task. [Read more…] about Reinforcing the value of simulation: Teaching dexterity to a real robot hand
Smart actuators help robot valets optimize high-volume automobile storage and retrieval
Before a newly manufactured car is delivered to a dealership, it typically joins thousands of others in a large storage lot where it can sit for days or even months.
Managing these lots requires many manual, time-intensive tasks: attendants must start each car, park it safely in a designated area, keep track of its location and be able to access it easily for distribution when needed.
Vehicle remarketers, airports, auto rental services and many other businesses that require high-volume auto storage and easy retrieval share these same issues. Ultimately, manual auto parking results in inefficient use of space and labour, risk of accidents and injuries to attendants, and wasted fuel. [Read more…] about Smart actuators help robot valets optimize high-volume automobile storage and retrieval
Arrow partners with Recom to identify ‘optimal power supply solution’ for robots
Automating industrial processes using robots has typically been the preserve of large companies that have the resources to make the investment and see the project through to completion.
Robco – the Robot Company – is now making the benefits of advanced automation more easily accessible for small-to-medium enterprises (SMEs) with its new approach to building industrial robots.
Constantin Dresel, Robco’s head of development, says: “We want to redefine automation in SMEs with our powerful modular construction kit. This concept allows us to very quickly deliver a customised solution that can be ready to use immediately without any complex programming or reliance on specialised skills.” [Read more…] about Arrow partners with Recom to identify ‘optimal power supply solution’ for robots
How Robotics Can Help Manufacturers Solve Labor Shortages
Manufacturing is critical to the United States and the Midwest. It’s responsible for creating jobs, supporting businesses, and generating economic growth. Manufacturing is also a key driver of innovation, helping to create new products and technologies that can improve our lives.
Nationwide, manufacturing was the fourth-largest employment sector in 2021, with more than 12 million employees. In the Midwest, it’s the largest employment sector. But like many other industries, manufacturers are facing a major challenge – a shortage of skilled workers.
Even if every skilled worker in the country were employed in manufacturing, Forbes reports there would still be 35 percent more job openings than workers capable of filling them. By 2030, Deloitte predicts the industry will be short by more than 2 million workers. [Read more…] about How Robotics Can Help Manufacturers Solve Labor Shortages
Denso increases efficiencies with fleet of MiR250 autonomous mobile robots
Denso, one of the world’s largest automotive technology suppliers, says it has “increased efficiencies, improved employee morale and ergonomics, and managed a tight labor market” by deploying six MiR250 autonomous mobile robots to transport materials in its 800,000 square foot powertrain component production facility in Athens, Tennessee.
According to a new case study from Mobile Industrial Robots (MiR), Denso has successfully executed more than 500,000 missions since deploying its first MiR robot in 2020, recognizing a return on investment (ROI) in less than a year along with an ongoing need for more AMRs for additional logistics applications.
One of MiR’s largest global customers, Denso has AMRs from MiR running in two other US locations, as well as three facilities in Europe and two in Asia. [Read more…] about Denso increases efficiencies with fleet of MiR250 autonomous mobile robots
Mercedes-Benz and Bosch launch ‘world’s first’ driverless parking system approved for commercial use
Mercedes-Benz and Bosch have reached an important milestone on the way to automated driving: Germany’s Federal Motor Transport Authority has approved their highly automated parking system for use in the P6 parking garage run by Apcoa at Stuttgart Airport.
This makes it the world’s first highly automated driverless parking function (SAE Level 4) to be officially approved for commercial use. The technological advancement of automated driving plays a key role in the mobility of the future.
With the vehicle and infrastructure taking over driving and manoeuvring, drivers will be able to turn their attention to other things instead of spending time looking for a parking space and manoeuvring in tight parking garages. [Read more…] about Mercedes-Benz and Bosch launch ‘world’s first’ driverless parking system approved for commercial use