Nutanix AI deals, AI data center guidelines and more

Recent moves from the AI computing giant include an expanded partnership with hybrid multi-cloud infrastructure vendor Nutanix, new reference architectures for AI data centers, and new peer-reviewed performance numbers for its upcoming Blackwell GPUs.

Nvidia hasn’t announced new GPUs in the past three months, but that doesn’t mean it hasn’t been busy finding other ways to expand and solidify its influence in the rapidly growing AI computing market.
For his part, Nvidia CEO and co-founder Jensen Huang has been busy traveling the world and meeting with national leaders to underline the company’s pursuit of new AI supercomputer deals under what is known as “sovereign AI.”
[Related: 10 Big Nvidia Executive Hires And Departures In 2024]
According to Nvidia, this refers to the idea that nations should build their own AI infrastructure, workforce, and business networks to advance their economies.
“Denmark recognizes that in order to innovate in AI, the most impactful technology of our time, it needs to foster its own AI infrastructure and ecosystem,” Huang said at an October summit with Denmark’s King Frederik X.
But while Huang is making stops in Denmark and other countries like Japan, India and Indonesia, his company has been making moves in other ways in recent months.
These moves include an expanded partnership with hybrid multi-cloud infrastructure vendor Nutanix, new reference architectures for AI data centers, new peer-reviewed performance numbers for its upcoming Blackwell GPUs, the hiring of a prolific Cisco Systems inventor, and Blackwell-related contributions to the Open Compute Project.
What follows is a summary of these and Nvidia’s other recent updates, arranged in reverse-chronological order from when they were announced.

Nvidia: Blackwell GPU doubles LLM performance in peer-reviewed tests

Nvidia said its upcoming B200 Blackwell GPU is twice as fast as the previous generation H100 GPU in large language model fine-tuning and pre-training.
That’s according to new peer-reviewed results from the latest MLPerf Training suite of benchmark tests, released Wednesday by MLCommons.
The AI computing giant said that, on a per-GPU basis, the B200 GPU is 2.2 times faster than the H100 for fine-tuning with a 70-billion-parameter Llama 2 model and twice as fast for pre-training with a 175-billion-parameter GPT-3 model.
Nvidia said the large language model (LLM) tests were conducted using an AI supercomputer called Nyx that is built with the company’s DGX B200 systems, each of which comes with eight 180-GB B200 GPUs.
The company also highlighted how ongoing software updates result in performance and feature improvements for its H100 GPUs, which debuted in 2022 and delivered “the highest performance among available solutions” in the new MLPerf Training tests.
Among the results presented in the latest MLPerf Training suite, Nvidia said the H100 achieved a 30 percent increase in training performance for the GPT-3 175B model compared to when the benchmark was initially released.
It also demonstrated the impact of software improvements in other tests.
“The scale and performance of Nvidia Hopper GPUs has more than tripled from last year on the GPT-3 175B benchmark. Additionally, on the Llama 2 70B LoRA fine-tuning benchmark, Nvidia increased performance by 26 [percent] using the same number of Hopper GPUs, reflecting continued software enhancements,” the company said.

Nutanix expands partnership with Nvidia for new cloud AI offering

Nutanix announced Tuesday that it has expanded its partnership with Nvidia through a new cloud-native offering called Nutanix Enterprise AI.
The hybrid multi-cloud infrastructure vendor said Nutanix Enterprise AI can be deployed “on any Kubernetes platform, at the edge, in core data centers, and on public cloud services” such as Amazon Web Services’ Elastic Kubernetes Service, Microsoft Azure’s managed Kubernetes service, and Google Cloud’s Google Kubernetes Engine.
“With Nutanix Enterprise AI, we are helping our customers easily and securely run GenAI applications on-premises or in the public cloud. Nutanix Enterprise AI can run on any Kubernetes platform and allows their AI applications to run in their own secure space, with predictable cost models,” Thomas Cornely, senior vice president of product management at Nutanix, said in a statement.
Nutanix said the cloud-native offering can be deployed with Nvidia’s full-stack AI platform and has been validated to work with the Nvidia AI Enterprise software platform, including Nvidia NIM microservices, which are designed to enable “secure, reliable deployment of high-performance AI model inference.”
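Neither company published sample code with the announcement, but NIM microservices expose an OpenAI-compatible HTTP API, so an application running alongside Nutanix Enterprise AI could query a deployed model roughly as follows. This is a minimal sketch only; the endpoint URL, port, and model name are illustrative assumptions, not details from either vendor:

    # Minimal sketch of querying a NIM microservice's OpenAI-compatible
    # chat endpoint. URL, port, and model name are assumed for illustration.
    import requests

    NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

    payload = {
        "model": "meta/llama-3.1-8b-instruct",  # hypothetical model identifier
        "messages": [{"role": "user", "content": "Summarize this support ticket."}],
        "max_tokens": 256,
    }

    response = requests.post(NIM_URL, json=payload, timeout=60)
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])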
Nutanix Enterprise AI is part of the company’s GPT-in-a-Box 2.0 appliance, which it said is part of the Nvidia-Certified Systems program to ensure optimal performance.

Nvidia replaces Intel in Dow index

Nvidia replaced Intel on the Dow Jones Industrial Average on Nov. 8, as the AI computing giant continues to put competitive pressure on the embattled chipmaker.
S&P Dow Jones Indices, the organization behind the DJIA, said in a statement a week earlier that the change was meant to “ensure a more representative exposure to the semiconductors industry.”
While Nvidia’s share price has risen more than 200 percent since the beginning of the year, Intel’s shares have fallen nearly 40 percent over the same period.
According to S&P, “the DJIA is a price-weighted index, and thus persistently lower-priced stocks have a minimal impact on the index.”
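To see the mechanics behind that statement, consider a toy Python sketch of a price-weighted average (the prices are invented and the real DJIA divisor is omitted): a 10 percent move in a low-priced stock barely moves the index, while the same percentage move in a high-priced stock dominates it.

    # Toy price-weighted "index": the level is the average of share prices,
    # so percentage moves in low-priced stocks barely register.
    prices = {"HIGH": 900.0, "MID": 150.0, "LOW": 20.0}  # invented prices

    def index_level(p):
        return sum(p.values()) / len(p)

    base = index_level(prices)
    low_up = dict(prices, LOW=prices["LOW"] * 1.10)     # low-priced stock +10%
    high_up = dict(prices, HIGH=prices["HIGH"] * 1.10)  # high-priced stock +10%

    print(f"{(index_level(low_up) - base) / base:.2%}")   # ~0.19% index move
    print(f"{(index_level(high_up) - base) / base:.2%}")  # ~8.41% index move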

Nvidia adds veteran astronaut Ellen Ochoa to board

Nvidia announced in early November that it has appointed veteran astronaut Ellen Ochoa to its board of directors.
The AI computing giant said the addition of Ochoa, who was the first Latina astronaut in space and a former director of NASA’s Johnson Space Center in Houston, brings the board to 13 members.
“Ellen’s extraordinary experience speaks volumes for her role as a pioneer and leader,” Nvidia CEO Jensen Huang said in a statement. “We look forward to her joining Nvidia’s board on our continued journey to build the future of computing and AI.”

Nvidia unveils enterprise reference architecture for AI data centers

Nvidia on Oct. 29 revealed what it called an Enterprise Reference Architecture, which is intended to serve as a guideline for partners and customers building AI data centers.
The AI computing giant said these guidelines are meant to help partners and customers build AI data centers faster and capture business value from AI applications, while ensuring they operate at optimal performance levels in a secure manner.
Server vendors participating in the Enterprise Reference Architecture Program include Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Supermicro.
The Enterprise Reference Architecture includes a comprehensive set of recommendations for accelerated infrastructure, AI-optimized networking, and the Nvidia AI Enterprise software platform.
The accelerated infrastructure guidelines provide recommendations for GPU-accelerated servers from the Nvidia-Certified Systems program, which ensures that such servers are optimized to deliver the best performance.
The networking guidelines explain how to “provide optimal network performance” with Nvidia’s Spectrum-X Ethernet platform and BlueField-3 data processing units. They also provide “guidance on optimal network configuration at multiple design points to address different workload scale requirements.”
Nvidia AI Enterprise includes NeMo and NIM microservices to “easily build and deploy AI applications” as well as Nvidia Base Command Manager Essentials for “infrastructure provisioning, workload management, and resource monitoring.”

Nvidia contributes Blackwell platform design to Open Compute Project

Nvidia announced on October 15 that it has contributed the “foundational elements” of its Blackwell accelerated computing platform to the Open Compute Project with the goal of enabling widespread adoption in the data center market.
The Blackwell platform in question is Nvidia’s upcoming GB200 NVL72 rack-scale system, which comes with 36 GB200 Grace Blackwell Superchips, each of which pairs an Arm-based, 72-core Grace CPU with two B200 GPUs, for a total of 72 Blackwell GPUs per rack.
Nvidia’s contribution to the Open Compute Project (OCP) focuses on the electro-mechanical design of the GB200 NVL72. Elements contributed by Nvidia include the rack architecture, compute and switch tray mechanicals, liquid-cooling and thermal environment specifications, and NVLink cable cartridge volumetrics.
The AI computing giant said it has also expanded support for OCP standards to its Spectrum-X Ethernet platform. According to the company, this will let customers use Spectrum-X’s “adaptive routing and telemetry-based congestion control” to accelerate Ethernet performance for scale-out AI infrastructure.
“Building on a decade of collaboration with OCP, Nvidia is working with industry leaders to shape specifications and designs that can be widely adopted throughout the data center,” Nvidia CEO Jensen Huang said in a statement. “By advancing open standards, we are helping organizations around the world harness the full potential of accelerated computing and build the AI factories of the future.”

Nvidia hires top Cisco inventor amid massive networking sales

Nvidia recently hired a 25-year Cisco Systems engineering veteran, once credited as the switching giant’s most prolific inventor, to lead AI and networking architecture development at the AI computing giant.
JP Vasseur, a former Cisco Fellow who was most recently the company’s vice president of machine learning and AI engineering for networking, announced in an Oct. 2 LinkedIn post that he had joined Nvidia as a senior distinguished engineer and chief architect of AI and networking.
Vasseur’s announcement came a little more than a month after Nvidia CFO Colette Kress said the company’s Spectrum-X line of Ethernet networking products for data centers is “well on track to begin a multibillion-dollar product line within a year.”

Nvidia launches NIM Agent Blueprints to boost enterprise AI

Nvidia announced on August 27 the launch of “pre-trained, customizable AI workflows” that can help enterprises more easily build and deploy custom generative AI applications.
The AI computing giant is calling these workflows NIM Agent Blueprints, which come with sample applications built with Nvidia NeMo, Nvidia NIM, and partner microservices, as well as reference code, customization documentation, and a Helm chart for deployment.
Channel partners selected by Nvidia to offer NIM Agent Blueprints include Accenture, Deloitte, SoftServe, and World Wide Technology.
Nvidia unveiled NIM microservices as an addition to its AI Enterprise software platform at its GTC 2024 event in March. At the time, the company said they were aimed at helping businesses “develop and deploy AI applications faster while retaining full ownership and control of their intellectual property.”
With NIM Agent Blueprints, enterprises can customize generative AI applications using their own proprietary data and continuously refine them based on user feedback.
The initial set of NIM Agent Blueprints includes workflows for digital humans in customer service, generative virtual screening in computer-aided drug discovery, and multimodal PDF extraction in retrieval-augmented generation.
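For context on that last workflow: retrieval-augmented generation answers a query by first retrieving relevant document chunks (such as text extracted from PDFs) and then passing them to an LLM as grounding context. The following self-contained Python sketch shows only the retrieval step, with a toy bag-of-words embedding standing in for the neural embedders and NIM-served models a real blueprint would use; the chunks and query are invented:

    # Toy retrieval step for RAG: embed chunks and query as bags of words,
    # rank by cosine similarity, and return the best-matching chunk.
    from collections import Counter
    import math

    def embed(text):
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a if t in b)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    # Chunks that, in the blueprint's scenario, would come from PDF extraction.
    chunks = [
        "Q3 revenue grew 12 percent on data center demand.",
        "The warranty covers parts and labor for three years.",
        "Employees accrue 1.5 vacation days per month.",
    ]

    query = embed("How long does the warranty last?")
    best = max(chunks, key=lambda c: cosine(query, embed(c)))
    print(best)  # context that would be inserted into the LLM prompt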