Nvidia CFO: ‘We’re racing to deliver at scale to meet the incredible demand’ for Blackwell

In her third-quarter remarks, delivered before CEO Jensen Huang addressed questions about Blackwell, Nvidia finance chief Colette Kress said: “Both the Hopper and Blackwell systems have some supply constraints, and demand for Blackwell is expected to exceed supply for several quarters into fiscal 2026.”

Nvidia CFO Colette Kress said the company is “racing to deliver at scale to meet the incredible demand” for its recently launched Blackwell GPUs and related systems, which are designed to deliver a major leap in the performance and efficiency of generative AI.
Kress made the comments during the AI computing giant’s Wednesday earnings call, where the company revealed that its third-quarter revenue nearly doubled to $35.1 billion, mainly due to continued high demand for its data center chips and systems.
[Related: Nvidia Reveals 4-GPU GB200 NVL4 Superchip, Releases H200 NVL Module]
In prepared remarks accompanying the latest earnings report, Kress said that Nvidia is set to begin shipping Blackwell products, now in full production, in January, and that it plans to ramp those shipments in the following months and through its 2026 fiscal year.
“Both the Hopper and Blackwell systems have some supply constraints, and demand for Blackwell is expected to exceed supply for several quarters into fiscal 2026,” she wrote.
On the call, Kress said Nvidia is “on track to exceed” its previous Blackwell revenue estimate of several billion dollars for the fourth quarter, which ends in late January, as its “visibility into supply continues to increase.”
“Blackwell is a customizable AI infrastructure with seven different types of Nvidia-made chips, multiple networking options, and air- and liquid-cooled data centers,” she said. “Our current focus is on meeting strong demand, increasing system availability and providing our customers with the optimal mix of configurations.”

Jensen Huang addresses Blackwell issues, road map execution

During the question-and-answer session of the call, a financial analyst asked Nvidia CEO Jensen Huang to address a Sunday report by industry publication The Information, which detailed some customer concerns about overheating of Blackwell GPUs in the most powerful configuration of its Grace Blackwell GB200 NVL72 rack-scale server platform.
The GB200 NVL72 is expected to serve as the basis for upcoming Nvidia AI offerings from major OEM and cloud computing partners, including Dell Technologies, Amazon Web Services, Microsoft Azure, Google Cloud and Oracle Cloud.
In response to the question about The Information’s report on the Blackwell GPUs overheating in the GB200 NVL72 system, Huang said that Blackwell production is “all in,” with the company exceeding previous revenue projections, but he added that the engineering Nvidia does with OEMs and cloud computing partners is “rather complicated.”
“The reason is that although we build the full stack and the full infrastructure, we disaggregate all the AI supercomputers and we integrate them into all the custom data centers and architectures around the world,” he said on the earnings call.
“That integration process is something we have done for generations. We’re pretty good at it, but there’s still a lot of engineering to be done at this point,” Huang said.
Huang noted that Dell Technologies announced Sunday that it has begun shipping its GB200 NVL72 server racks to customers, including emerging GPU cloud service provider CoreWeave. He also pointed to Blackwell systems being brought up by Oracle Cloud Infrastructure, Microsoft Azure and Google Cloud.
“But as you can see from all the systems that are being put in place, Blackwell is in very good shape,” he said. “And as we mentioned earlier, the supply and what we plan to ship this quarter exceeds our previous estimates.”
Addressing a question about Nvidia’s ability to execute on its data center road map, which moved to an annual release cadence for GPUs and other chips last year, Huang remained steadfast in his commitment to the accelerated production plan.
“We are on an annual road map, and we are looking forward to continuing to execute on that annual road map. And by doing so, we increase the performance of our platform. But it’s also really important to understand that when we’re able to increase performance and do so at X-factors at a time, we’re reducing the cost of training, we’re reducing the cost of inference, and we’re reducing the cost of AI so that it can be more accessible,” he said.

Partner’s opinion on Nvidia growth, investors’ reaction

After Nvidia released its third-quarter earnings, investors reacted, sending the company’s share price down more than 1 percent in after-hours trading.
While the company beat Wall Street expectations on revenue by nearly $2 billion and beat the average analyst estimate for earnings per share by 6 cents, its fourth-quarter revenue estimate came in only slightly higher than Wall Street expected.
Andy Lin, CTO of Mark III Systems, a top Nvidia partner based in Houston, Texas, told CRN that while demand for Nvidia’s data center GPUs and related systems is “obviously incredibly strong,” the company has set “such a high level” for itself with multiple quarters of triple-digit growth.
“These numbers are still staggering, especially on a year-over-year comparison. But this is clearly a smaller increase than before,” he said.
However, Lin said, some customers are holding off on making any purchases right now because Nvidia is switching from Hopper-based systems to Blackwell-based systems.
“There are certainly a number of organizations that we look at that certainly haven’t spent [money on new infrastructure and are waiting] on Blackwell. So the question is, how many of them, at what scale, and what will that look like? And I think that might be something that the market is probably underestimating a little bit,” he said.