Nvidia conference shows it’s still on track for AI dominance

In what seems like just one in a series of critically important moments for the most followed and scrutinized tech company in the world, Nvidia hosted developers, partners, and analysts this week in San Jose for its annual GTC event. GTC is Nvidia’s preeminent showcase of everything AI, or at least it has become that as the company has shifted its focus from gaming and graphics to compute and artificial intelligence.

The big question attached to this event, and to Nvidia itself for the foreseeable future, is whether it can keep riding this wave of AI growth and unprecedented leadership through the largest computing revolution in 30 years.

During the opening keynote, a relaxed and confident CEO Jensen Huang announced the next generation of the company’s AI processors, codenamed Blackwell, in variants called the B100, B200, and GB200. Blackwell follows the currently shipping Hopper generation (the H100) and supplants it as Nvidia’s highest-performance chip for AI processing.

The Blackwell AI chips have some key architectural differences from the current generation that keep Nvidia in the driver’s seat. The new parts offer anywhere from 2.5x to 5x better performance across a range of workloads, from training newer, larger AI models to the inference compute necessary for deploying those models to the market. Nvidia did make even bolder claims, like a 30x increase in AI inference performance, but that figure comes with several caveats about the number of chips used in the server and a new, lower-precision data format that the H100 doesn’t support. Still, there is no getting around the fact that the B200 is going to be a big step forward in raw AI performance compared to what is shipping today.

The new chip also offers a much larger memory capacity, up to 192GB compared to the current 80GB, which means it can hold bigger models and data sets for training and inference, allowing developers like OpenAI to build new capabilities even faster.

Huang also showcased more than half a dozen other processors and networking components that help Nvidia validate and deploy its AI solutions to enterprise customers as a complete solution, rather than just one piece of a very complex puzzle. This is a primary reason for Nvidia’s leadership position in the AI compute market: it doesn’t just make a GPU, it has also built or acquired all the surrounding hardware components for networking, switches, data processing, cooling, and power delivery. While competitors and startups in the field might look to overtake Nvidia on any one of these components, Nvidia makes the whole hardware (and software) infrastructure simple to buy and deploy.

One slight shift in the language around the Blackwell announcements was the added emphasis on the inference side of AI compute. While the last year or two were all about training and the extremely high amount of compute needed to create new AI models like those used by Microsoft and OpenAI’s ChatGPT, more of the story going forward will be about inference: the kind of compute needed to run those trained models against new data, customer input, and so on. An often-discussed risk for Nvidia is the possibility that its advantages wither away when the market’s center of gravity moves from training to inference.

To combat that, Nvidia showed new performance results for the B200 chip that focused on inference, or as Huang has started referring to it, “generation.” The idea is that AI inference calculations are simply creating “tokens” that are turned into whatever output the AI is tasked to produce: words and sentences for a large language model, pictures and animations for diffusion models. The hope is that calling it “generation” rather than “inference” will help change the narrative and highlight the strong performance of Nvidia chips in this segment of AI too.
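To make the “tokens” framing concrete, here is a minimal, illustrative sketch of the autoregressive loop at the heart of LLM generation; the model interface and the toy scorer are hypothetical stand-ins, not any Nvidia or real-library API. The point is that every unit of output costs a full forward pass through the model, which is why inference throughput matters so much once a model is deployed at scale.

```python
# Illustrative sketch: inference as token-by-token "generation".
# The model interface here is a hypothetical stand-in for demonstration.

def generate(prompt_tokens, model, max_new_tokens=32, eos_token=0):
    """Greedy generation: one forward pass per output token."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        logits = model(tokens)                 # full pass over the sequence
        next_token = max(range(len(logits)), key=logits.__getitem__)
        if next_token == eos_token:            # model signals it is done
            break
        tokens.append(next_token)              # output feeds back as input
    return tokens

if __name__ == "__main__":
    # Toy "model": favors token 1 until the sequence reaches 8 tokens.
    def toy_model(tokens):
        return [0.0, 1.0] if len(tokens) < 8 else [1.0, 0.0]

    print(generate([1, 1], toy_model))  # -> [1, 1, 1, 1, 1, 1, 1, 1]
```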

Another critical announcement during the event was what Nvidia calls NIMs, or Nvidia inference microservices. As the name implies, these are services the company offers rather than just products. The target users are enterprises and software developers that want to deploy AI using prebuilt software components that are easy to implement but still customizable, without dealing with hardware or system design. Nvidia says it has models and solutions ready to implement for language models, image creation, drug discovery, medical imaging, and more. And at the Game Developers Conference in San Francisco this same week, Nvidia teams were showcasing microservices for gaming functions like AI characters and animation modeling.
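Nvidia pitches NIMs as prebuilt containers fronted by industry-standard APIs. As a purely hypothetical sketch, assuming a locally hosted microservice that speaks an OpenAI-style chat-completions protocol (a common convention in this space, not a confirmed NIM detail here), client code could be as simple as:

```python
# Hypothetical client for a NIM-style microservice over HTTP.
# The URL, model name, and request schema are assumptions for
# illustration; consult Nvidia's NIM documentation for specifics.
import requests

NIM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local deployment

def ask(prompt: str) -> str:
    """Send one chat-completion request and return the reply text."""
    response = requests.post(
        NIM_URL,
        json={
            "model": "example-llm",  # hypothetical model name
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 128,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize this week's GTC announcements in one sentence."))
```

The appeal for enterprises is that the hardware, model serving, and optimization behind that endpoint all become Nvidia’s problem, not theirs.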

This puts Nvidia in a position to sell recurring, continuous-revenue services rather than just one-and-done hardware. And it is likely a sign of things to come as Nvidia moves even further down the path to being a full AI solutions provider for its customers.

Nvidia had no shortage of partners to talk about or highlight on stage. Walls of logos popped up throughout the event, and Huang even commented that more than $100T of the world’s industry was represented by the presenters at GTC. Specific call-outs went to Synopsys, Cadence, Google, Amazon AWS, Microsoft, and even Dell, with a spotlight on Michael Dell in the crowd and a joking mention from Huang that Dell was ready to take orders for Nvidia AI systems today.

With the new B200 and associated chips due out “late this year” according to the company’s CFO, the question remains whether Nvidia did enough this week to maintain its trajectory as the unrivaled leader in the AI space. I believe that the Blackwell products, despite being much more expensive to produce than the current shipping chips, push the company further ahead of anyone else on the market, including Intel and AMD.

Maybe more important is the clear alignment of the entire AI ecosystem around Nvidia and its direction. Huang’s opening keynote was attended by tens of thousands of developers and AI technologists, in what looked more like a rock concert than a tech conference keynote. And that leadership position makes Nvidia’s moat in both hardware and software for enterprise AI, the one that keeps competitors at bay, even more of a long-term advantage for this tech giant.