Whilst we continue to learn more about AI – the hype versus the reality – the biggest beneficiary of the AI narrative has been Nvidia, the world’s largest maker of GPUs, the chips used for graphics and gaming and now expected to supply the majority of the chips used for AI. Nvidia has quadrupled in market cap in the last nine months, joining the exclusive club of a handful of trillion-dollar market cap companies in the world, trading at 30x sales and over 200x earnings. So how did Nvidia get here? Jordan Schneider interviews Doug O’Laughlin, the author of Fabricated Knowledge, to give us a brief history of Nvidia and its iconic founder and CEO, Jensen Huang.

Nvidia’s success in the GPU market was driven by Huang’s focus on accelerated computing, or parallel computing, which puts it in the driver’s seat of the AI revolution as AI becomes an extended use case for parallel compute.
“The type of calculation that GPUs were meant for — the graphics processing for the highly parallel calculation of all the pixels — ends up being almost a perfect use case for the primary, heaviest part of AI computing.”

But it’s not just happenstance. Huang consciously went about building his dominance in GPUs.

“The type of processor ends up being well-suited for gaming. This market has a need that Nvidia can fulfill in the near term, and it can make money the entire way. But Jensen definitely, clearly had his eye on the ball.

He was talking in the 2010s about accelerated computing — about how all the workloads of the world needed to be sped up. Every year he would talk about it. Everyone was like, “Oh, that’s pretty cool.” But year after year, we never really saw it happen. But Jensen, the entire time, was giving away the ecosystem.

Remember, code is not used to running on a graphics card. It has to be split into small pieces and then fed into the machine in parallel.

There’s something called CUDA, which is software that makes code more parallel so you can put it into the GPU.

Jensen thought about this: he knew that if he gave away the software for free, it would create an ecosystem.

He started giving it away for free as much as he could to all the researchers, maybe as early as 2010. He would just give away GPUs and CUDA and make sure all the researchers were working with and using GPUs. That way, they would only know how to do their problems on GPUs while optimizing their physics libraries on GPUs.

Jensen had his eye on the ball and knew he was creating an ecosystem and making his product the one to use. He gives it away for free so everyone knows how to use it. Then everyone uses it in their workflows and optimizes around it.
He does this for about a decade. Jensen the whole time looks at these problems and knows these super-massive multiplication problems are the future of big data.

I don’t think that would have been a spicy opinion in the 2010s — that matrix multiplication would be used for very large data sets and hard, complicated problems; that’s not a big leap. But pursuing that path, seeing that vision, and creating the ecosystem around it — giving away a lot of it for free — is how Jensen locked in that ecosystem ten years ago.”
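To make the quoted ideas concrete (work split into many small, independent pieces, with matrix multiplication as the canonical workload), here is a minimal, illustrative CUDA C++ sketch that is not from the interview: each GPU thread computes a single element of a matrix product, and the launch spreads roughly a million such threads across the chip. The kernel name, matrix size and launch configuration are arbitrary assumptions made for illustration.

#include <cstdio>
#include <cuda_runtime.h>

// Illustrative sketch: each thread computes one element of C = A * B
// (all N x N, row-major). The "small piece" handed to each thread is a
// single dot product; the GPU runs huge numbers of them concurrently.
__global__ void matmul(const float* A, const float* B, float* C, int N) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += A[row * N + k] * B[k * N + col];
        C[row * N + col] = sum;
    }
}

int main() {
    const int N = 1024;                         // assumed matrix size
    const size_t bytes = N * N * sizeof(float);

    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);               // unified memory: visible to CPU and GPU
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < N * N; ++i) { A[i] = 1.0f; B[i] = 2.0f; }

    dim3 threads(16, 16);                       // 256 threads per block
    dim3 blocks((N + 15) / 16, (N + 15) / 16);  // enough blocks to cover the matrix
    matmul<<<blocks, threads>>>(A, B, C, N);    // launches ~1M threads in parallel
    cudaDeviceSynchronize();

    printf("C[0] = %.1f (expected %.1f)\n", C[0], 2.0f * N);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}

In practice the ecosystem O’Laughlin describes goes well beyond hand-written kernels like this: NVIDIA ships heavily optimized libraries such as cuBLAS and cuDNN for exactly these matrix routines, and that is the layer researchers built their workflows around.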

O’Laughlin then talks about the rise of transformer models (a type of AI model), which also played in Nvidia’s favour. But it is the sheer lack of competition at this stage that seems to underpin the market’s bullishness on Nvidia. He talks about Nvidia as a three-headed hydra:

“Nvidia has three important parts of the stack extremely locked up. One is the hardware. Another is the software with CUDA and all the optimization around it. The third is networking and systems.

…A lot of the AI hardware startup companies had a good hardware solution, but they didn’t have a good software solution.

Networking is another level. These models are becoming so large — you go and buy a $40,000 GPU, but it’s still not going to be enough to train your model. It can take years, or months even across tens of thousands of GPUs, to train a model.

One issue is the interconnect problem. It’s not just how good the software and hardware are — it’s how well the hardware works together in a big system.”
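As a rough illustration of the interconnect point (again not from the interview), the sketch below uses NVIDIA’s NCCL library to sum a gradient buffer across every GPU in a single machine. This all-reduce is the collective at the heart of data-parallel training; at the scale described above the same operation has to run across thousands of machines, which is why the networking and systems layer matters as much as the chip itself. Buffer sizes and variable names are assumptions made for illustration.

#include <cstdio>
#include <vector>
#include <cuda_runtime.h>
#include <nccl.h>

// Illustrative sketch: one all-reduce summing a gradient buffer across every
// GPU visible on this machine, the basic step of data-parallel training.
int main() {
    int nDev = 0;
    cudaGetDeviceCount(&nDev);
    if (nDev == 0) return 0;                    // nothing to do without GPUs

    std::vector<ncclComm_t> comms(nDev);
    std::vector<cudaStream_t> streams(nDev);
    std::vector<float*> grads(nDev);
    const size_t count = 1 << 20;               // assumed per-GPU gradient size

    std::vector<int> devs(nDev);
    for (int i = 0; i < nDev; ++i) devs[i] = i;
    ncclCommInitAll(comms.data(), nDev, devs.data());   // one communicator per GPU

    for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(i);
        cudaStreamCreate(&streams[i]);
        cudaMalloc(&grads[i], count * sizeof(float));
    }

    // Sum every GPU's gradients into every GPU's buffer in a single collective.
    ncclGroupStart();
    for (int i = 0; i < nDev; ++i)
        ncclAllReduce(grads[i], grads[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
        cudaFree(grads[i]);
        ncclCommDestroy(comms[i]);
    }
    printf("all-reduce complete across %d GPUs\n", nDev);
    return 0;
}

How quickly this step completes is governed by the links between the GPUs (NVLink within a server, InfiniBand or Ethernet between servers) rather than by the GPUs themselves, which is the “how well the hardware works together in a big system” problem O’Laughlin refers to.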
But he reckons the closest competitor is Google:
“…Google has been at the forefront of AI research for a long time. They have a lot of the things Nvidia has, but they are custom, in-house, and proprietary. They don’t sell it as a solution.

TPUs are their hardware, and XLA is their software. They have their own OCI networking product with the models on top of it.

Google right now is probably the closest real competitor that has a complete vertical full stack. Nvidia doesn’t have a full vertical stack because they’re not customer-facing and they don’t make the models. They make some open-source models. They improve the entire ecosystem. But they’re essentially AI as a service, and they’re trying to sell it to people who are making the models.
Google is trying to own the entire stack — consumer to model to hardware to networking — and sell it. So far, Google has truly the most competitive, differentiated offering relative to Nvidia. But no one else has made the solutions that Nvidia does.

There’s a big difference between making a theoretical chip that can solve a model and then you have to troubleshoot it — versus Nvidia, where you could buy 10,000 GPUs and it will work out of the box. That’s a big difference.
Productization — Nvidia has done a really good job at making a product to sell to customers — that’s their big differentiator. The three-headed hydra of Nvidia is hard to compete with.”

He then talks about how sanctions have handicapped Chinese competition.

If you want to read our other published material, please visit https://marcellus.in/blog/

Note: the above material is neither investment research, nor financial advice. Marcellus does not seek payment for or business from this publication in any shape or form. Marcellus Investment Managers is regulated by the Securities and Exchange Board of India as a provider of Portfolio Management Services. Marcellus Investment Managers is also regulated in the United States as an Investment Advisor.

Copyright © 2022 Marcellus Investment Managers Pvt Ltd, All rights reserved.


