CoreWeave: The AI Cloud Bet with Hidden Risks
CoreWeave just filed its S-1, officially preparing to go public (maybe). If you’re paying attention to the AI arms race, you should care.
This isn’t just another cloud computing company. CoreWeave is an infrastructure player betting that the AI boom is not just hype: that AI compute is the new oil, and whoever controls the pipelines will own the future.
But what does CoreWeave actually do? What do they spend money on? And does their business model make sense?
Let’s break it down in two ways:
The real answer (for those who want a serious breakdown)
The 15-year-old answer (a relatable analogy)
The Real Answer: CoreWeave is Selling AI Pickaxes in a Gold Rush
CoreWeave is a cloud infrastructure provider built specifically for AI workloads. Most cloud providers (AWS, Google Cloud, Microsoft Azure) are generalists, offering compute for everything from hosting your grandma’s WordPress blog to running banking software.
CoreWeave is a specialist. They don’t chase general-purpose hosting, managed databases, or consumer SaaS. They do one thing: sell GPU compute to AI companies.
How CoreWeave Makes Money
They buy NVIDIA’s most powerful GPUs: the H100s, A100s, and whatever Jensen Huang cooks up next.
They build GPU-first data centers, optimizing everything for AI training and inference.
They rent out that GPU power to AI startups, hedge funds, and enterprises who need to train massive models but don’t want to buy their own chips.
Think of it like AWS, but for AI compute only.
What’s Under the Hood?
Revenue Growth: $1.92 billion in 2024, up from just $228.9 million in 2023. That’s 8.4x growth in one year—insane.
GPUs Owned: 250,000+
Data Centers: 32, up from 10 a year ago (and note: these are leased, not owned).
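The headline growth multiple checks out. Using the revenue figures quoted above, here is the quick sanity check:

```python
# Revenue figures from the S-1, as quoted above (in millions of USD).
revenue_2023 = 228.9
revenue_2024 = 1920.0

growth_multiple = revenue_2024 / revenue_2023
print(f"YoY growth: {growth_multiple:.1f}x")  # → YoY growth: 8.4x
```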
Where They Spend Money
CoreWeave is in blitzscale mode, and it shows:
Data Centers – They don’t just rent out GPUs; they lease and operate entire facilities. Expanding from 10 to 32 locations in a year requires billions in capex.
Buying NVIDIA GPUs – If you want the best AI infrastructure, you need real estate in NVIDIA’s supply chain. CoreWeave is doing exactly that.
Debt Financing – In 2024, they raised over $7 billion in private debt from Blackstone, Magnetar, and others. That’s unprecedented for a cloud startup.
This is a "go big or die trying" strategy. It only works if demand for AI compute keeps outpacing supply (we’ll get into the risks below).
The 15-Year-Old Answer: CoreWeave is an Exclusive Gym for AI Models
Imagine you’re a powerlifter training for the biggest competition of your life. You need access to the heaviest weights, the best equipment, and zero distractions.
AWS is Planet Fitness: cheap, accessible, but packed with casual gym-goers. You have to wait for a squat rack, and the dumbbells only go up to 50 lbs.
CoreWeave is a private powerlifting gym (a nicer Equinox): expensive, exclusive, and designed only for elite lifters. There are no treadmills, no yoga classes: just heavy iron and chalk.
In AI terms, the lifters are AI models, and the weights are GPUs. CoreWeave gives AI companies the best possible training environment without distractions.
And just like serious lifters don’t train at Planet Fitness, serious AI companies don’t want to share GPUs with enterprise SaaS workloads on AWS.
Risk Factors
Why AI Companies Might Start Buying Their Own Chips
Right now, renting from CoreWeave makes sense because:
GPUs are hard to get: NVIDIA is backlogged for months.
Data centers are expensive: You need real estate, cooling, networking.
Flexibility is valuable: You don’t need to commit billions in capex.
But if an AI company is big enough, renting becomes inefficient.
Look at OpenAI, Anthropic, Meta, Google DeepMind, and xAI. They’re all:
Buying their own GPUs (massive orders of H100s and B200s).
Building their own AI-optimized data centers (Google’s TPU farms, Meta’s AI superclusters).
Partnering directly with NVIDIA instead of going through middlemen.
For them, renting is a tax on growth. If they expect to spend billions on compute every year, owning makes more sense than renting.
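The rent-vs-buy logic above can be made concrete with a back-of-the-envelope break-even calculation. Every number below is an illustrative assumption, not a figure from the S-1 or from NVIDIA’s price list:

```python
# Back-of-the-envelope rent-vs-buy break-even for a single GPU.
# All numbers are illustrative assumptions, not real CoreWeave/NVIDIA pricing.
purchase_price = 30_000.0  # assumed upfront cost of one high-end GPU (USD)
rental_rate = 2.50         # assumed cloud rental price per GPU-hour (USD)
owner_opex = 0.50          # assumed power/cooling/ops cost per GPU-hour if you own it

# Each hour of owned use saves the rental rate minus the cost of running the box.
hourly_savings = rental_rate - owner_opex
break_even_hours = purchase_price / hourly_savings
break_even_years = break_even_hours / (24 * 365)

print(f"Break-even after {break_even_hours:,.0f} GPU-hours "
      f"(~{break_even_years:.1f} years of 24/7 use)")
```

Under these made-up numbers, owning pays for itself after roughly 15,000 GPU-hours of sustained use, which is why labs that expect to run flat-out for years lean toward buying, while everyone else rents.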
What This Means for CoreWeave
If more AI companies start buying their own chips, CoreWeave’s market shrinks.
The biggest customers will leave. If OpenAI, Anthropic, and Mistral all build their own infrastructure, that’s a huge chunk of demand gone.
Their pricing power collapses. Right now, CoreWeave makes money because GPUs are scarce. If companies can get GPUs directly, CoreWeave loses leverage.
They still have to pay for all those data centers. Unlike AWS, they’re not diversified. They can’t just pivot to hosting SaaS apps.
This is exactly what happened in crypto mining: when GPUs were scarce, mining-as-a-service companies made a killing. When supply caught up, their margins collapsed overnight.
The Counterargument: Not Every AI Company Can Do This
Not everyone is OpenAI or Google. For 99% of AI startups, owning GPUs still makes no sense.
Buying thousands of GPUs requires insane capital.
Running a data center isn’t their core competency.
Many startups don’t know how much compute they’ll need—renting is safer.
In other words, CoreWeave is betting on the long tail—that smaller AI companies will always prefer renting.
This is the AWS playbook:
Before AWS, companies bought their own servers.
AWS made cloud cheap, scalable, and easy.
Now, almost no one builds their own data centers unless they’re massive.
CoreWeave wants to be AWS for AI compute.
The Real Question: Does CoreWeave Have a Moat?
Right now, CoreWeave doesn’t manufacture the chips: NVIDIA does. They’re a middleman.
If NVIDIA decides to sell directly to AI companies, CoreWeave loses.
If AWS/Google lower GPU prices, CoreWeave loses.
If AI companies start building their own infra, CoreWeave loses.
Their only defense? Speed and specialization.
They have first-mover advantage in AI cloud, and they can outmaneuver hyperscalers who are still optimizing for general workloads. But if hyperscalers prioritize AI? If AI demand normalizes? CoreWeave is in trouble.
What’s the Endgame?
CoreWeave IPOs.
They use the money to expand even faster.
They either get big enough to survive—or get acquired.
In the long run, AI infrastructure will either consolidate into a few dominant hyperscalers—AWS, Google, Microsoft—or NVIDIA will own the stack from chips to cloud.
CoreWeave is betting there’s room for a third player.
The question is: Are they right?