Microscan, a provider of barcode, machine vision, verification, and lighting technology, has demonstrated flexible in-line inspection using the Universal Robots collaborative UR5 robotic arm and Microscan’s latest machine vision smart camera platform.
The robotic demo, at Automate 2017, featured what the company claims is “the world’s smallest Ethernet smart camera” and was given in partnership with Olympus Controls, an engineering services company specializing in machine automation.
Bright Box, a specialist provider of connected car solutions, says its new self-driving car system has been trained using a neural network, deep learning and extreme racing computer games, in particular GTA 5.
Bright Box, a European company, says its software is the basis for connected-vehicle applications used by the Nissan Smart Car app in the Middle East and the KIA Remoto app.
InVisage says its Spark4K near-infrared sensor brings cinematic resolution, high dynamic range, and low power consumption to cameras and other devices.
InVisage claims its Spark4K is the world’s highest resolution IR sensor, with 35 per cent quantum efficiency at 940 nm, dynamic pixel sizing, a global shutter, and up to “50 times less system power consumption” – less than what, the company left us in the dark about.
InVisage Technologies, the pioneering developer of QuantumFilm camera sensors, launched the Spark4K near-infrared (NIR) camera sensor.
Exclusive interview with Claude Florin, CEO of Fastree3D, on helping robots finally see the light just that little bit better than they did before
How do robots see the world? Until now, most of them have had to make do with conventional digital cameras for eyes. In technological terms, these cameras are much like those available to consumers in the shops and, increasingly these days, in their smartphones. As clever as they are, and as high quality as the images turn out to be, these cameras only capture the image as a two-dimensional arrangement of pixels.
This means that a robot using such cameras would not be able to perceive the three-dimensional space its “eyes” are looking at. This problem of perception – of inferring 3D space from 2D images – is solved, or at least tackled, at the coding stage.
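One common software approach to recovering depth from flat images is stereo triangulation: two horizontally offset cameras view the same scene, and the pixel shift (disparity) of a feature between the two images reveals its distance. This is not a description of Fastree3D’s own method – its sensors measure time of flight directly – but a minimal sketch of the kind of coding-stage work involved, using hypothetical camera parameters:

```python
# Minimal sketch: depth from stereo disparity.
# With two cameras separated by a baseline b, a point's depth z follows
# from its disparity d (pixel shift between the two images):
#     z = f * b / d
# where f is the focal length in pixels. All numbers below are invented
# for illustration, not taken from any real camera.

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in metres for a given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 10 cm baseline.
# A feature shifted by 35 px between the left and right images:
print(depth_from_disparity(35, 700, 0.10))  # → 2.0 metres
```

The closer an object, the larger its disparity, which is why nearby obstacles are easier to range than distant ones – and why active approaches such as LiDAR are attractive at longer distances.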