Nvidia CEO Jensen Huang reinforces the company’s role in the ‘evolving’ AI trade

Nvidia (NVDA) CEO Jensen Huang took to the company’s fourth quarter earnings call on Wednesday, looking to reaffirm the chip giant’s place in the AI trade — and calm Wall Street jitters around the technology’s future growth.

Shares of Nvidia were off more than 7% on the year going into Wednesday’s print, as investors and analysts raised questions about Big Tech’s continued AI spending.

The fears: that the rise of DeepSeek’s AI models meant developers didn’t need to use pricey chips like Nvidia’s Blackwell GPUs — and that custom chips developed by Nvidia customers like Amazon (AMZN) and Google (GOOG, GOOGL) could threaten the company’s long-term health.

Huang held off on making opening remarks during Nvidia’s call. Instead, he answered analysts’ questions throughout and closed with comments explaining how models like DeepSeek’s will require even more power than earlier models.

When DeepSeek debuted its R1 model in January, it sent AI stocks into a tailspin.

That’s because the company says it developed the software, which rivals OpenAI’s platform, using Nvidia’s H20 chips. Those processors are far less powerful than the AI titan’s Blackwell chips, leading investors to question whether Nvidia was facing an existential crisis. After all, if companies could create AI platforms using more affordable chips, why would they need to spend billions on Nvidia’s high-end processors?

Huang, however, explained that because DeepSeek’s model, and others like it, provide better answers when running on more powerful AI chips, Nvidia will continue to benefit from their use.

“The more the model thinks, the smarter the answer. Models like OpenAI, Grok 3, DeepSeek-R1 are reasoning models that apply inference time scaling,” Huang said. “Reasoning models can consume 100x more compute. Future reasoning models can consume much more compute.”

Huang also explained that models like DeepSeek are driving demand for inferencing, the process of running AI applications. Training AI models requires a massive amount of power and performance. But as inferencing becomes the main use case for AI systems, Wall Street investors have questioned whether Nvidia’s customers will opt for cheaper, less powerful chips.

Huang, however, contends that DeepSeek’s models, and those like them, illustrate that inferencing will require plenty of computing power in its own right.

In addition to addressing DeepSeek, Huang also hit on the impact of ASICs on the industry and what they could mean for Nvidia. ASICs, or application-specific integrated circuits, are chips designed, as you might guess, for specific applications. Google used its Tensor Processing Unit, an ASIC, to train its Gemini AI platform.

In Huang’s estimation, Nvidia’s chips offer 2x to 8x better performance per watt than ASICs and can be used across various AI applications because they’re built for general-purpose computing and have a large software ecosystem that customers can rely on.

“There’s a lot of different reasons why we do well, why we win,” Huang said.

What’s more, the CEO said that just because an ASIC is built doesn’t mean it will be deployed.

“There are a lot of chips that get built, but when the time comes, a business decision has to be made, and that business decision is about deploying a new engine, a new processor into a limited AI factory in size, in power,” he explained.

No worries?

Jensen Huang speaking at the NVIDIA keynote at Michelob Ultra Arena in Las Vegas, NV, on January 6, 2025. (Credit: DeeCee Carter/MediaPunch/IPX)

Beyond ASICs, Huang hit on the larger issue of whether Nvidia can grow beyond its heavy reliance on cloud service providers. Currently, CSPs make up 50% of Nvidia’s data center sales. And if those customers continue to develop their own ASICs and end up relying on them, Nvidia risks losing a key source of revenue.

But Huang said the company has far more options in front of it.

“We’ve really only tapped consumer AI and search and some amount of consumer generative AI, advertising, recommenders … the next wave is coming,” he explained.

“Agentic AI for enterprise, physical AI for robotics, and sovereign AI … each one of these are barely off the ground, and we can see them. We can see great activity happening in all these different places and these will happen,” Huang added.

The upshot: Nvidia sees more use cases for its chips beyond being power plants for Amazon, Google, Meta, and Microsoft.

Up next, the company will host its GTC conference on March 18, where it’s expected to launch its Blackwell Ultra chip and provide details about its next-generation Vera Rubin processor.

Sign up for Yahoo Finance's Week in Tech newsletter.

Email Daniel Howley at dhowley@yahoofinance.com. Follow him on Twitter at @DanielHowley.

Click here for the latest technology news that will impact the stock market

Read the latest financial and business news from Yahoo Finance
