NVIDIA Outlines Vision for Accelerated Computing, AI, Omniverse, Avatars, and Robots in GTC Keynote
On Tuesday, NVIDIA founder and CEO Jensen Huang announced technologies poised to transform multitrillion-dollar industries, bringing simulation of real and virtual worlds to everything from self-driving cars to avatars, robots, and climate modeling.
Huang delivered the keynote at NVIDIA’s virtual GTC, where he announced NVIDIA Omniverse Avatar and NVIDIA Omniverse Replicator among a slew of other announcements, demos, and far-reaching plans.
Huang demonstrated how NVIDIA’s technologies are brought together by NVIDIA Omniverse, the company’s virtual world simulation and collaboration platform for 3D workflows.
With Omniverse Avatar, he demonstrated Project Tokkio for customer service and Project Maxine for video conferencing.
“A constant theme you’ll see — how Omniverse is used to simulate digital twins of warehouses, plants and factories, of physical and biological systems, the 5G edge, robots, self-driving cars, and even avatars,” Huang said.
NVIDIA will construct a synthetic doppelganger named E-2, or Earth-Two, to model and predict climate change, according to Huang.
‘Full Stack, Data Center Scale, and Open Platform’
Accelerated computing gave birth to modern AI, and the waves it created are now sweeping through research and the world’s industries, according to Huang.
He explained that it all starts with three chips: the GPU, CPU, and DPU, as well as systems ranging from cloud to edge called DGX, HGX, EGX, RTX, and AGX.
From graphics and AI to sciences and robotics, NVIDIA has produced 150 acceleration libraries for 3 million developers.
“NVIDIA accelerated computing is a full-stack, data-center-scale and open platform,” Huang said.
Huang announced NVIDIA Quantum-2, which he described as “the most advanced networking platform ever constructed,” and the BlueField-3 DPU, which he said ushers in cloud-native supercomputing.
According to him, Quantum-2 provides cloud computing providers and supercomputing centers with high performance, broad accessibility, and solid security.
Cybersecurity is a major concern for both businesses and governments, and Huang proposed a zero-trust framework built on three pillars to address the problem.
Applications are isolated from infrastructure by BlueField DPUs. The latest BlueField SDK, NVIDIA DOCA 1.2, supports next-generation distributed firewalls. And, assuming the intruder is already inside, NVIDIA Morpheus employs the “superpowers of accelerated computing and deep learning to detect intruder actions,” according to Huang.
Omniverse Avatar and Omniverse Replicator
“We now have the technology to create new 3D worlds or model our physical world,” Huang remarked, referring to Omniverse.
NVIDIA unveiled Omniverse Avatar to let developers create interactive characters with Omniverse that can see, speak, converse on a wide range of topics, and interpret genuinely stated intent.
Huang demonstrated how Project Maxine for Omniverse Avatar combines computer vision, Riva voice AI, and avatar animation and aesthetics to create the Toy Jensen Omniverse Avatar, a real-time conversational AI robot.
He also demonstrated Project Tokkio, a customer-service avatar that can see, communicate with, and understand two people in a restaurant kiosk.
Huang also demonstrated Project Maxine’s capacity to integrate cutting-edge video and audio capabilities into virtual collaboration and content creation apps.
A demonstration showed a woman on a video call speaking English in a noisy cafe, yet she could be heard clearly without the background noise. As she spoke, her words were transcribed and translated into French, German, and Spanish in real time, and then spoken by an avatar in her own voice and accent, thanks to Omniverse.
NVIDIA launched Omniverse Replicator, a synthetic-data-generation engine for training deep neural networks, to assist developers in creating the massive volumes of data required to train AI.
NVIDIA has created two replicators: Omniverse Replicator for Isaac Sim, for general robotics, and Omniverse Replicator for DRIVE Sim, for autonomous vehicles.
Omniverse has been downloaded 70,000 times by designers at Fortune 500 businesses since its launch late last year.
Omniverse Enterprise is now available by subscription at $9,000 USD per year.
AI Models and Systems
To train large language models, Huang announced NeMo Megatron. Such models, he said, “will be the biggest mainstream HPC application ever.”
With the Deep Graph Library (DGL), a new Python package, graphs, a major data format in modern data science, can now be projected into deep-neural-network frameworks.
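The core idea of “projecting a graph into a deep-learning framework” is to turn nodes and edges into tensors so that graph operations become ordinary tensor math. The following is a minimal NumPy sketch of that concept, not DGL’s actual API; the graph, features, and message-passing step are illustrative assumptions.

```python
import numpy as np

# A toy directed graph with 4 nodes, given as (src -> dst) edges.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
num_nodes = 4

# Project the graph into tensor form: a dense adjacency matrix.
adj = np.zeros((num_nodes, num_nodes))
for src, dst in edges:
    adj[src, dst] = 1.0

# One-hot node features, one row per node.
feats = np.eye(num_nodes)

# One round of message passing: each node sums its in-neighbors' features.
# Once the graph lives in tensor form, this is just a matrix product,
# which is exactly what makes graphs amenable to GPU-accelerated frameworks.
messages = adj.T @ feats
print(messages)
```

In a real graph neural network library such as DGL, the same pattern (gather neighbor features, aggregate, update) is expressed through the library’s message-passing API and runs on framework tensors rather than raw NumPy arrays.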
NVIDIA Modulus, which was announced on Tuesday, creates and trains physics-informed machine learning models capable of learning and obeying physical principles.
Triton, NVIDIA’s inference server for all AI workloads, has added support for forest models and multi-GPU, multi-node inference for large language models.
And Huang introduced three new libraries.
- ReOpt – for the $10 trillion logistics industry.
- cuQuantum – to accelerate quantum computing research.
- cuNumeric – to accelerate NumPy for scientists, data scientists and machine learning and AI researchers in the Python community.
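cuNumeric’s pitch is that it acts as a drop-in replacement for NumPy, so existing array code can run on GPUs (and across nodes) largely unchanged, subject to cuNumeric’s API coverage. The sketch below uses plain NumPy so it is self-contained; the workload itself is an illustrative assumption.

```python
# With cuNumeric installed, the import below would become
# `import cunumeric as np` and the same array code could run on GPUs.
# Plain NumPy is used here so the example is self-contained.
import numpy as np

# A typical array workload: distances of random points from their centroid.
rng = np.random.default_rng(0)
points = rng.random((1000, 3))        # 1000 points in 3D
center = points.mean(axis=0)          # centroid
dists = np.linalg.norm(points - center, axis=1)
print(dists.shape, float(dists.mean()))
```

The appeal for scientists and data scientists is precisely that no rewrite is needed: the acceleration comes from swapping the module, not restructuring the code.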
Huang unveiled NVIDIA Launchpad to aid in the delivery of services based on NVIDIA’s AI technology to the edge.
NVIDIA is teaming up with Equinix, the world’s largest data center company, to pre-install and integrate NVIDIA AI into data centers around the world.
Robotics
Over 700 organizations and partners are now part of NVIDIA’s Isaac ecosystem, which has grown fivefold over the past four years.
The NVIDIA Isaac robotics platform, according to Huang, can now be readily incorporated into the Robot Operating System, or ROS, a popular set of software libraries and tools for robot applications.
According to Huang, the most realistic robotics simulator ever produced is Isaac Sim, which was built on Omniverse.
“The goal is for the robot to not know whether it is inside a simulation or the real world,” Huang explained.
To help in this process, Isaac Sim Replicator can generate synthetic data for training robots.
Huang said that Replicator simulates sensors, generates automatically labeled data, and uses a domain randomization engine to build rich and diverse training data sets.
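The domain randomization idea can be sketched in a few lines of plain Python: vary scene parameters randomly for each synthetic sample, and emit the ground-truth label for free because the simulator placed the objects itself. The parameter names and functions below are hypothetical illustrations, not Omniverse Replicator’s real API.

```python
import random

# Hypothetical scene parameters a randomization engine might vary per frame.
LIGHTING = ["noon", "dusk", "overcast"]
TEXTURES = ["wood", "metal", "concrete"]

def randomize_scene(rng):
    """Sample one randomized scene configuration."""
    return {
        "lighting": rng.choice(LIGHTING),
        "floor_texture": rng.choice(TEXTURES),
        "object_pose": [rng.uniform(-1.0, 1.0) for _ in range(3)],
    }

def generate_dataset(n, seed=0):
    """Each sample is automatically labeled: the ground-truth object pose
    is known exactly because the simulator set it when building the scene."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        scene = randomize_scene(rng)
        samples.append({"scene": scene, "label": scene["object_pose"]})
    return samples

dataset = generate_dataset(5)
print(len(dataset), dataset[0]["scene"]["lighting"])
```

Randomizing lighting, textures, and poses across many samples is what makes the resulting data set diverse enough that a network trained on it can transfer to the messier real world.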
Autonomous Vehicles
Everything that moves will be fully or mostly autonomous, Huang said. “By 2024, the vast majority of new EVs will have substantial AV capability,” he added.
NVIDIA DRIVE is the company’s full-stack, open platform for autonomous vehicles, and Hyperion 8 is its latest hardware and software architecture.
Its sensor suite includes 12 cameras, nine radars, 12 ultrasonic sensors, and one front-facing lidar, all processed by two NVIDIA Orin SoCs.
Huang demonstrated numerous new Hyperion features, including the Omniverse Replicator for DRIVE Sim, an Omniverse-based synthetic data generator for autonomous vehicles.
Hyperion 8 sensors, 4D perception, deep learning-based multisensor fusion, feature tracking, and a new planning engine are now available from NVIDIA.
The interior of the car will be transformed as well. NVIDIA Maxine’s technology will revolutionize how we interact with our automobiles.
“With Maxine, your car will become a concierge,” Huang said.
Earth Two
NVIDIA will construct Earth-Two, or E-2, to simulate and predict climate change, according to Huang. “All the technologies we’ve invented up to this moment are needed to make Earth-Two possible,” he said.