Last week, I tested Nvidia's GeForce Now, a streaming service that lets you play games on a remote GPU (graphics accelerator card) that streams full 4K, ultra-high-detail games back into your living room, or wherever you have a 40+ Mbit/s internet connection. GeForce Now already has an impressive 25 million subscribers, and 20 million of those pay for the Premium or the Ultimate tier, so it's not a small business.
The service itself is awe-inspiring from a technical point of view. But from a business perspective, it is possibly the most beautiful business model move I have ever seen. Let me explain.
The traditional GPU business model
The traditional GPU business model works like this: you manufacture a GPU, you sell that GPU, a customer installs it, and it then sits idle for most of its lifespan. The upside is limited to the margin on that one card.
Overhead (R&D and admin), manufacturing, and distribution cost per GPU: $400
Wholesale price per GPU: $600
Bottom-line contribution per GPU manufactured: $200
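As a minimal sanity check, here is that traditional per-card margin in a few lines of Python, using the assumed $400 cost and $600 wholesale price from above:

```python
# Traditional model: the margin is earned once, when the card is sold.
cost_per_gpu = 400       # assumed overhead (R&D, admin), manufacturing, distribution ($)
wholesale_price = 600    # assumed wholesale price per GPU ($)

print(f"Traditional bottom-line contribution per GPU: ${wholesale_price - cost_per_gpu}")  # $200
```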
The beautiful 50X(?) GPU business model of tomorrow.
Today, Nvidia can manufacture a GPU, put it in its own data center, connect it to GeForce Now, and give several GeForce Now users access to that one specific GPU.
If we assume that one GPU in Nvidia's data center can serve 30 different GeForce Now users in a given month, playing on different days, at different times, and in different time zones, the back-of-the-napkin calculation, with some assumptions, would look like this:
Annual GPU overhead: An Nvidia GeForce RTX 4080, with a power draw of 320 watts under 100% load 24/7, consumes approximately 2,803.2 kilowatt-hours (kWh) annually. Average European electricity prices for residential users are around €0.20 to €0.25 per kWh; using €0.22 per kWh, the annual electricity cost would be approximately €617.
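For reference, the same electricity estimate as a short Python sketch, assuming the 320 W draw, 24/7 full load, and €0.22 per kWh stated above:

```python
# Electricity for one RTX 4080 at an assumed 320 W, 100% load, 24/7, at €0.22/kWh.
power_watts = 320
hours_per_year = 24 * 365                                # 8,760 hours
price_per_kwh_eur = 0.22

annual_kwh = power_watts * hours_per_year / 1000         # ≈ 2,803.2 kWh
annual_cost_eur = annual_kwh * price_per_kwh_eur         # ≈ €616.70
print(f"{annual_kwh:.1f} kWh/year ≈ €{annual_cost_eur:.2f}/year")
```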
Other relevant annual data center overhead (networking, CPU, memory, and streaming) per GPU: $400. Treating euros roughly at par with dollars, that puts the total assumed annual GPU and data center overhead at roughly $1,016 per GPU.
Then the math would look something like this:
Year one:
Overhead (R&D and admin) and manufacturing cost per GPU: $350
Revenue per GPU (Ultimate tier): ($20 × 30 users) × 12 = $7,200
Assumed annual GPU and data center overhead per GPU: $1,016
Bottom-line contribution per GPU manufactured, year one: $5,834
Year two:
Manufacturing cost per GPU: $0
Revenue per GPU, year two (Premium tier): ($10 × 30 users) × 12 = $3,600
Assumed annual GPU and data center overhead per GPU, year two: $1,016
Bottom-line contribution per GPU manufactured, year two: $2,584
Year three:
Manufacturing cost per GPU: $0
Revenue per GPU, year three (Premium tier): ($10 × 30 users) × 12 = $3,600
Assumed annual GPU and data center overhead per GPU, year three: $2,000
Bottom-line contribution per GPU manufactured, year three: $1,600
Total three-year bottom-line contribution per GPU manufactured and put into a GeForce Now data center: $10,018(!), roughly 50X what you would make in the traditional model.
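Putting the whole napkin calculation into one small Python sketch, where every figure is an assumption stated above and the euro-denominated electricity cost is treated roughly at par with dollars:

```python
# Streaming model: one GPU in a GeForce Now data center, shared by 30 users,
# year by year, using the assumptions from the text above.
users_per_gpu = 30
annual_overhead = 1016          # ≈ €617 electricity + $400 other data center overhead

# Year 1: Ultimate tier at $20/month, plus the one-time $350 build cost.
year1 = 20 * users_per_gpu * 12 - 350 - annual_overhead   # $5,834
# Year 2: Premium tier at $10/month.
year2 = 10 * users_per_gpu * 12 - annual_overhead          # $2,584
# Year 3: Premium tier again, with overhead assumed to rise to $2,000.
year3 = 10 * users_per_gpu * 12 - 2000                     # $1,600

total = year1 + year2 + year3                              # $10,018
traditional_margin = 200                                   # from the traditional model above
print(f"Three-year contribution: ${total}, about {total / traditional_margin:.0f}x the traditional margin")
```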
There has to be something here that I do not see.
Even if halved, these numbers are too good to be true. What am I missing? Is my math entirely off? I assume it has to be my assumptions.
No downside for Nvidia?
Since Nvidia is catering to both business models today, they can control the transition: they do not have to bet their whole business on everyone streaming their games, or on exactly when that will happen. They can move slowly and comfortably from one model to the other while adjusting their manufacturing accordingly. A bit like Netflix's move from DVD subscriptions to streaming.