Robotics & Automation News

Market trends and business perspectives

Nvidia’s predictions for 2024: What’s Next in AI?

Richard Kerris

Vice President of Developer Relations and Head of Media and Entertainment, Nvidia

The democratization of development: Virtually anyone, anywhere will soon be set to become a developer. Traditionally, developing applications or services required proficiency in a specific programming language.

As AI models running on that computing infrastructure are increasingly trained on the languages of software development, anyone will be able to prompt the machine to create applications, services, device support and more.

While companies will continue to hire developers to build and train AI models and other professional applications, expect to see significantly broader opportunities for anyone with the right skill set to build custom products and services.

They’ll be helped by text inputs or voice prompts, making interactions with computers as simple as verbally instructing them.

“Now and Then” in film and song: Just as the “new” AI-augmented song by the Fab Four spurred a fresh round of Beatlemania, the dawn of the first feature-length generative AI movie will send shockwaves through the film industry.

Take a filmmaker who shoots using a 35mm film camera. The same content can soon be transformed into a 70mm production using generative AI, reducing the significant costs involved in film production in the IMAX format and allowing a broader set of directors to participate.

Creators will transform beautiful images and videos into new types and forms of entertainment by prompting a computer with text, images or videos. Some professionals worry their craft will be replaced, but those concerns will fade as generative AI gets better at being trained on specific tasks.

This, in turn, will free up hands to tackle other tasks and provide new tools with artist-friendly interfaces.

Rev Lebaredian

Vice President of Nvidia Omniverse and Simulation Technology

Industrial digitalization meets generative AI: The fusion of industrial digitalization with generative AI is poised to catalyze industrial transformation.

Generative AI will make it easier to turn aspects of the physical world – such as geometry, light, physics, matter and behavior – into digital data.

Democratizing the digitalization of the physical world will accelerate industrial enterprises, enabling them to design, optimize, manufacture and sell products more efficiently.

It will also enable them to more easily create virtual training grounds and synthetic data to train a new generation of AIs that will interact and operate within the physical world, such as autonomous robots and self-driving cars.

3D interoperability takes off: From the drawing board to the factory floor, data will be interoperable for the first time.

The world’s most influential software companies and practitioners from the manufacturing, product design, retail, e-commerce and robotics industries are committing to the newly established Alliance for OpenUSD.

OpenUSD, the universal language between 3D tools and data, will break down data silos, enabling industrial enterprises to collaborate across data lakes, tool systems and specialized teams more easily and quickly than ever, accelerating the digitalization of previously cumbersome, manual industrial processes.

Bob Pette

Nvidia Vice President of Enterprise Platforms

Building anew with generative AI: Generative AI will allow organizations to design cars by simply speaking to a large language model or create cities from scratch using new techniques and design principles.

The architecture, engineering, construction and operations (AECO) industry is building the future using generative AI as its guidepost.

Hundreds of generative AI startups and customers in AECO and manufacturing will focus on creating solutions for virtually any use case, including design optimization, market intelligence, construction management and physics prediction.

AI will accelerate a manufacturing evolution that promises increased efficiency, reduced waste and entirely new approaches to production and sustainability.

Developers and enterprises are focusing in particular on point cloud data analysis, which uses lidar to generate representations of built and natural environments with precise details. This could lead to high-fidelity insights and analysis through generative AI-accelerated workflows.
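The point cloud workflows described above typically begin by thinning dense lidar scans before analysis. Below is a minimal, hedged sketch of one common preprocessing step, voxel-grid downsampling, written in pure Python as an illustration; the point values and voxel size are invented for the example, not drawn from any Nvidia workflow.

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.5):
    # Group points into cubic voxels of side `voxel` and keep each
    # voxel's centroid - a common way to thin dense lidar scans.
    buckets = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)
        buckets[key].append(p)
    return [
        tuple(sum(coord) / len(pts) for coord in zip(*pts))
        for pts in buckets.values()
    ]

# Toy "scan": two nearby points collapse into one centroid,
# a distant point survives on its own.
scan = [(0.1, 0.1, 0.0), (0.2, 0.1, 0.0), (3.0, 0.0, 0.0)]
thinned = voxel_downsample(scan, voxel=0.5)
```

Production pipelines would use GPU-accelerated libraries for this, but the grouping-and-averaging logic is the same.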

Generative AI also offers enterprises a significant opportunity to gain new insights from their existing data.

By customizing pre-trained foundation models with techniques like fine-tuning and retrieval augmented generation (RAG), organizations can harness the transformative power of generative AI for domain-specific tasks to improve decision-making and develop a competitive edge.
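To make the RAG idea concrete, here is a toy sketch of the retrieval-and-augmentation half of the pattern: rank domain documents against a query, then prepend the best match to the prompt sent to a foundation model. The bag-of-words similarity and all document text are stand-ins for illustration; real systems use learned embeddings and a vector database.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": term-frequency counts (stand-in for a learned model).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=1):
    # Rank enterprise documents by similarity to the query.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, documents):
    # Augment the prompt with retrieved context before calling the model.
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Line 7 paint robots require recalibration every 2,000 cycles.",
    "Quarterly revenue grew 8 percent in the EMEA region.",
]
prompt = build_prompt("How often do the paint robots need recalibration?", docs)
```

The payoff is that the foundation model answers from the enterprise's own data rather than from its general training distribution.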

To capture this opportunity and accelerate adoption of generative AI, enterprises will need a trusted pathway to design and implement scalable, efficient, and reliable infrastructure.

Deepu Talla

Nvidia Vice President of Embedded and Edge Computing

The rise of robotics programmers: LLMs will lead to rapid improvements for robotics engineers. Generative AI will develop code for robots and create new simulations to test and train them.

LLMs will accelerate simulation development by automatically building 3D scenes, constructing environments and generating assets from inputs.

The resulting simulation assets will be critical for workflows like synthetic data generation, robot skills training and robotics application testing.
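One plausible shape for the workflow above is to have an LLM emit a structured scene description that a simulator then loads. The sketch below assumes a JSON layout format and asset paths that are entirely hypothetical; the hard-coded string stands in for a model response.

```python
import json
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    asset: str        # path to a simulation asset (hypothetical)
    position: tuple   # (x, y, z) placement in the scene

def build_scene(llm_output: str):
    # Parse an LLM-generated layout into placement records
    # that a simulator could load as a test environment.
    layout = json.loads(llm_output)
    return [
        SceneObject(o["name"], o["asset"], tuple(o["position"]))
        for o in layout["objects"]
    ]

# Stand-in for a model response describing a warehouse test scene.
llm_output = """{
  "objects": [
    {"name": "shelf_01",  "asset": "warehouse/shelf.usd",  "position": [0, 0, 0]},
    {"name": "pallet_01", "asset": "warehouse/pallet.usd", "position": [2.5, 0, 0]}
  ]
}"""
scene = build_scene(llm_output)
```

Constraining the model to a machine-readable schema like this is what makes the generated environments usable for synthetic data generation and robot skills training at scale.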

In addition to helping robotics engineers, transformer AI models, the engines behind LLMs, will make robots themselves smarter so that they better understand complex environments and more effectively execute a breadth of skills within them.

For the robotics industry to scale, robots have to become more generalizable – that is, they need to acquire new skills more quickly or bring existing skills to new environments.

Generative AI models, trained and tested in simulation, will be a key enabler in the drive toward more powerful, flexible and easier-to-use robots.
