Tales of the Tape #9: The $1.4 Trillion Bet That Could Reshape the World
Before diving in: I know this can be quite a contentious topic. I am not saying AI is good or bad; this is just a recap of a podcast I found thought-provoking, and I leave it to you to make up your own opinion about it :)
Please enjoy!
Last week, Brad Gerstner sat down with Sam Altman and Satya Nadella just after they announced their partnership restructure. What started as a $1 billion bet in 2019 is now a $130 billion equity stake and $1.4 trillion in compute commitments.
When Gerstner questioned how a company with $13 billion in revenue justifies $1.4 trillion in spending, Sam’s response went viral: “Brad, if you want to sell your shares, I’ll find you a buyer.” It’s a perfect encapsulation of the current divide: critics see circular revenue games and unsustainable spending; believers see a company that keeps beating projections.
TLDR: What You Need to Know
The Partnership Restructure:
Microsoft: $13.5B invested, 27% ownership, revenue share through 2032
OpenAI: $130B nonprofit created, $250B Azure commitment, leading models Azure-exclusive until AGI
Both exclusivity and revenue share end if AGI is verified by an independent panel
Key Reveals:
OpenAI has $1.4 trillion in compute commitments despite $13B in revenue
Microsoft has $400B in backlog they can’t fulfill (supply constrained, not demand constrained)
Azure could have grown 41-42% instead of 39% with more compute
Unit economics for AI inference still unproven at scale
The Real Story:
Enterprise AI is already profitable (GitHub Copilot, M365 Copilot)
Consumer AI is still searching for a sustainable business model
Productivity gains are real, but create a “golden age of margin expansion,” not job growth
50-state regulatory patchwork about to hit (federal preemption failed)
$4 trillion US CapEx over 5 years = 10x Manhattan Project scale
The Bet:
Both companies are betting their assumptions hold. If right: first $10T company. If wrong: historic capital misallocation. We’ll know which in 18-24 months.
The Deal Structure: Simpler Than It Sounds
Microsoft’s Investment:
$13.5 billion invested since 2019
27% ownership stake on a fully diluted basis
Revenue share on all OpenAI revenues through 2032 (estimated 15%)
Exclusive rights to OpenAI’s “stateless APIs” on Azure until 2032
Royalty-free access to OpenAI’s IP for seven years
OpenAI’s Obligations:
Leading models (ChatGPT, GPT-6, etc.) stay Azure-exclusive until AGI or 2032
Everything else (open source models, Sora, agents, consumer devices) can go anywhere
$250 billion Azure compute commitment over time
Pay that revenue share to Microsoft on all revenues
Both the exclusivity and the revenue share end early if AGI gets verified by an independent panel.
This makes the definition of AGI suddenly very important, which is fascinating given how hand-wavy everyone usually is about it.
When Gerstner pushed on whether they’d need to call in the jury soon, Altman deflected smoothly: “We will continue to be good partners and figure out what makes sense.” Which, to me, means: “we’ve built in flexibility because nobody actually knows how this will play out”.
Nadella was more direct: “Intelligence capability-wise is going to continue to improve. Our real goal is putting that in the hands of people and organisations so they can get the maximum benefits.” In other words, the technical definition matters less than the business reality. But again, this could also just be PR-speak.
The $130 Billion Nonprofit: the Elephant in the Room
OpenAI just created one of the largest nonprofits in the world: $130 billion in OpenAI stock, capitalised from day one. The California Attorney General signed off without objection.
The first $25 billion goes to health and AI security/resilience. Altman’s reasoning: “There are some areas where market forces don’t quite work for what’s in the best interest of people.” The examples he gave: AI-automated scientific discovery, cyber defence, air safety research, and economic studies.
The structure lets the nonprofit grow in value as OpenAI succeeds, while the PBC below can still raise growth capital. Whether this accomplishes anything meaningful or becomes another well-funded nonprofit that does little remains to be seen.
The $1.4 Trillion Question
Sam’s response to concerns about OpenAI’s spending commitments was revealing in what it said about both his confidence and the market’s misunderstanding:
“We do plan for revenue to grow steeply. Revenue is growing steeply. We are taking a forward bet that it’s going to continue to grow, and that not only will ChatGPT keep growing, but we will be able to become one of the important AI clouds.”
Then he threw down the gauntlet: “There are not many times that I want to be a public company, but one of the rare times it’s appealing is when those people are writing these ridiculous ‘OpenAI’s about to go out of business’ takes. I would love to tell them they could just short the stock.”
Nadella backed him up with facts: “There has not been a single business plan that I’ve seen from OpenAI that they have put in and not beaten it.” This is notable coming from someone who’s seen countless over-promised, under-delivered tech plans - but again, let’s note that he can’t say otherwise in this setup.
Even if OpenAI hits every revenue projection, the unit economics of AI inference are fundamentally different from previous tech cycles. Every query burns GPU cycles. Unlike Google’s search index (a fixed cost amortised across billions of queries), each ChatGPT interaction has real marginal cost.
The bet isn’t just that demand will grow. It’s that they’ll figure out how to make it profitable at scale.
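To make the unit-economics gap concrete, here is a tiny sketch of the difference between search (a fixed index cost amortised across queries) and chat (a real marginal cost per query). All numbers are made-up assumptions for illustration, not figures from the podcast:

```python
# Hypothetical illustration of the unit-economics gap described above.
# Every number here is an assumption for the sketch, not a real figure.

def search_cost_per_query(index_fixed_cost: float, queries: float) -> float:
    """Search: a (mostly) fixed index cost amortised across all queries."""
    return index_fixed_cost / queries

def chat_cost_per_query(tokens_per_query: float, cost_per_million_tokens: float) -> float:
    """Chat: each query burns GPU cycles, so cost scales with usage."""
    return tokens_per_query * cost_per_million_tokens / 1_000_000

# Amortised search cost per query falls as volume grows...
assert search_cost_per_query(1e9, 1e12) < search_cost_per_query(1e9, 1e11)

# ...while chat cost per query stays flat regardless of volume.
print(chat_cost_per_query(1500, 2.0))  # ~$0.003 per query under these assumptions
```

The point of the toy model: scale makes search cheaper per query, but chat only gets cheaper if the cost per token itself comes down.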
Note 1: Since this episode was released, OpenAI signed a new deal with AWS.
Note 2: I highly recommend these two videos from How Money Works to understand the OpenAI deals a bit more. Yes, clickbaity titles, but I promise it’s 12 minutes well spent :).
Compute Constraints: The Real Bottleneck
Both Microsoft and OpenAI are compute-constrained right now. Not “we’d like more compute” constrained, but actually unable to serve the demand.
Greg Brockman said it plainly on CNBC: “If we could 10x our compute, we might not have 10x more revenue, but we’d certainly have a lot more revenue.”
Nadella confirmed Microsoft’s Azure would have grown faster than 39% if it had more capacity. The bottleneck isn’t demand. It’s supply.
But the constraint isn’t chips anymore. It’s power and infrastructure. Nadella: “The biggest issue we are now having is not a compute glut, but it’s power and the ability to get the builds done fast enough close to power. I may actually have a bunch of chips sitting in inventory that I can’t plug in.”
The $1.4 trillion in compute commitments isn’t reckless spending. It’s securing capacity to meet existing demand while betting demand continues growing as costs per token decrease.
The Jevons paradox is in full effect: as intelligence gets cheaper, usage explodes. OpenAI’s seen it happen with every price reduction. The question isn’t whether there’s demand at current prices. It’s what happens as they push prices down 40x year over year.
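The Jevons-paradox claim can be sketched with a toy constant-elasticity demand model: if demand for tokens is sufficiently elastic, cutting the price grows total revenue. The elasticity value below is my own made-up assumption, not anything stated in the episode:

```python
# Toy Jevons-paradox sketch. The elasticity figure is a made-up assumption,
# chosen only to show the mechanism, not an estimate from the podcast.

def tokens_demanded(price: float, base_demand: float = 1.0, elasticity: float = 1.5) -> float:
    """Constant-elasticity demand: quantity scales like price ** (-elasticity)."""
    return base_demand * price ** (-elasticity)

def revenue(price: float) -> float:
    return price * tokens_demanded(price)

# With elasticity > 1, a 40x price cut *increases* total revenue,
# because usage grows faster than the price falls.
assert revenue(1 / 40) > revenue(1.0)
```

Whether real-world token demand is actually that elastic is exactly the open question the $1.4 trillion bet hinges on.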
The $400 Billion Proof Point
Microsoft has $400 billion in remaining performance obligations. That’s booked business they can’t yet fulfill because they don’t have the capacity.
Nadella: “That $400 billion has a very short duration, as Amy (Note: the CFO of MSFT) explained. It’s the two-year duration on average. That’s definitely our intent. That’s one of the reasons why we’re spending the capital outlay with high certainty that we just need to clear the backlog.”
This isn’t speculative demand or vendor financing games. These are signed contracts with customers willing to pay for compute they can’t access yet. The backlog is diversified across both first-party (Microsoft’s own products) and third-party customers.
And it doesn’t include OpenAI’s $250 billion commitment, which has a longer duration and will build accordingly.
The criticism about circular revenues (Note: see the videos I mentioned) misses this fundamental point: Microsoft is supply-constrained, not demand-constrained. Nadella was explicit: “We are shaping the demand here. We are not demand-constrained; we are supply-constrained. We are shaping the demand such that it matches the supply optimally with a long-term view.”
Azure grew 39% in the quarter on a $93 billion run rate (compare to Google Cloud at 32%, AWS at 20%). But Nadella admitted it could have grown 41-42% with more compute. The constraint is real.
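Rough arithmetic (mine, not from the episode) on what that constraint costs: taking the $93 billion run rate and the 41-42% growth Nadella said was possible, the foregone revenue is on the order of a couple of billion dollars annualised:

```python
# Back-of-the-envelope on the supply constraint, using the figures quoted above.
# The midpoint of 41-42% is my own simplification.
run_rate = 93e9          # Azure annualised run rate after 39% growth
actual_growth = 0.39
possible_growth = 0.415  # midpoint of the 41-42% Nadella mentioned

base = run_rate / (1 + actual_growth)            # implied prior-year base, ~$67B
foregone = base * (possible_growth - actual_growth)
print(f"~${foregone / 1e9:.1f}B of annualised revenue left on the table")
```

Small next to the headline numbers, but real money lost purely to a lack of capacity.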
The Software Economics Problem
Nadella’s comments about SaaS disruption deserve more attention than they got. He’s essentially saying that the entire architecture of enterprise software is changing.
Old model: Data layer, business logic tier, UI layer, all tightly coupled.
New model: Data layer, agent tier (replacing business logic), UI layer that’s increasingly optional.
“Context engineering is going to be very important. One of the things I love about our Microsoft 365 offering is its low ARRPU, high usage. People are using it all the time, creating lots and lots of data, which is going into the graph. And our ARRPU is low.”
This is Nadella’s answer to why Microsoft 365 won’t get disrupted by AI: they have the data moat from high usage. More code generated goes into GitHub. More PowerPoints, Excel models, and chat conversations go into the graph. All needed for grounding.
But he’s also acknowledging a hard truth: “If you are high ARRPU, low usage, then you have a little bit of a problem.”
Translation: Most SaaS companies are fucked.
The entire value proposition of B2B SaaS was charging high prices for software with minimal marginal cost. Now there’s real marginal cost (tokens), and the business logic that justified the high prices is being commoditised into an agent tier that any competent developer can build.
According to their conversation, public software companies trading at 5.2x forward revenue (below their 10-year average) aren’t mispriced. The market is correctly pricing in that their moats are eroding.
Search vs. Chat: The Multi-Trillion Dollar Question
Gerstner asked the question everyone’s dancing around: search has extraordinary unit economics (fractions of a penny per query with massive ad revenue). Chat has terrible unit economics by comparison (real GPU cost per interaction).
Nadella didn’t sugarcoat it: “Search was pretty magical in terms of its ad unit and its cost economics. In this one, each chat, you have to burn a lot more GPU cycles. The economics are different. That’s why a lot of the early economics of chat have been the freemium model and subscription.”
The consumer advertising model that prints money for Google and Microsoft Bing doesn’t translate cleanly to chat. They’re still figuring out the monetisation.
Nadella thinks enterprise is clearer: “Agents are the new seats.” In other words, you charge per agent the same way you used to charge per employee. The consumption model works in B2B even if consumer monetisation is murky.
Consumer AI is still searching for a business model. Enterprise AI already has one.
NOTE: the next 2 parts are very America-oriented
The 50-State Patchwork Problem
The regulatory mess about to hit AI companies: Gerstner brought up the Colorado AI Act (full effect in February), which creates liability for “algorithmic discrimination” without clear compliance guidelines.
Altman was blunt: “I don’t know how we’re supposed to comply with that Colorado law. I would love them to tell us, and we’d like to be able to do it. But from what I’ve read of that, I literally don’t know what we’re supposed to do.”
The bigger problem isn’t Colorado specifically. It’s that federal preemption failed (killed by Senator Blackburn at the last second), so now they are headed toward 50 different state laws. Nadella called it out: “The fundamental problem of this patchwork approach is, quite frankly, between OpenAI and Microsoft, we’ll figure out a way to navigate this. We can figure this out. The problem is that anyone starting a startup.”
This is how you accidentally kill innovation while claiming to protect it. Large companies can afford compliance departments and legal teams. Startups trying to compete with OpenAI will drown in state-by-state regulatory arbitrage. (Note: Europe, I am looking at you)
The solution everyone knows is right (federal framework) appears politically dead. So they might get the worst of both worlds: regulatory burden without effective safety oversight.
Reindustrializing America (Or Trying To)
US tech companies are deploying $4 trillion in CapEx over the next 4-5 years. Gerstner noted this is 10x the size of the Manhattan Project on an inflation-adjusted basis.
Nadella described what’s happening around Microsoft’s Wisconsin data centre: “Most people think, oh, data centre, that is sort of like, yeah, it’s going to be one big warehouse and that’s fully automated. A lot of it is true. But first of all, what went into the construction of that data centre and the local supply chain of the data centre, that is, in some sense, the reindustrialisation of the United States as well.”
The Trump administration has been cutting deals specifically to support this buildout. South Korea committed $350 billion in US investments just last week. The focus is on power generation, grid infrastructure, semiconductor fabs, and the entire supply chain needed to build AI infrastructure at scale.
US companies aren’t just building domestically. Microsoft is the biggest investor in compute infrastructure globally. They’re bringing American tech (and American capital) to Europe, Asia, Latin America, and Africa.
This matters geopolitically. The country that controls AI infrastructure has leverage. China understood this years ago with their Belt and Road Initiative. The US is finally playing the same game with compute instead of traditional infrastructure.
Whether this actually leads to sustainable manufacturing jobs or just temporary construction booms remains to be seen. But the scale of capital deployment is real, and the strategic importance is clear to everyone involved.
The Productivity Story Nobody Wants to Say Out Loud
Nadella’s answer on headcount growth: “We will grow headcount. The headcount we grow will grow with a lot more leverage than the headcount we had pre-AI.”
Example: Microsoft’s network operations lead manages 400 fibre operators globally. Instead of hiring a team, she built agents to automate the DevOps pipeline. She didn’t fire anyone. She just never hired the people she would have needed.
This is playing out everywhere. Not “AI replaces jobs” but “each employee accomplishes 10x more, so we don’t hire.” Those jobs never show up in unemployment stats. They simply don’t exist.
Aggregate across the economy: top lines grow faster than headcount. Productivity per employee explodes. Margins expand. Great for shareholders. But what happens to the people who would have had those jobs?
Nadella’s optimistic take is that the IT backlog is infinite, so there will always be work. Maybe. Or maybe demand doesn’t scale with capability because the bottleneck isn’t coding, it’s knowing what to build.
Note: that last part is my take, but we will see :)
What This Actually Means
The compute race is infrastructure, not chips. Microsoft’s advantage is operational know-how: running fleets at maximum utilisation across workloads, geographies, and GPU generations. That’s why they maintain margins while competitors lever up.
Enterprise AI won. Consumer AI is still figuring it out. Microsoft 365 Copilot and GitHub Copilot print money with clear pricing models. ChatGPT has terrible unit economics and no path to profitability at scale for now.
Incumbents are extending dominance, not being disrupted. Microsoft is using OpenAI to lock in their enterprise moat for the next decade. The partnership reinforces existing advantages rather than creating threats.
But both are making massive bets on unproven assumptions:
OpenAI bets: Revenue grows fast enough to justify commitments, cost per token declines faster than prices, and they build a moat before commoditization.
Microsoft bets: AI strengthens (not commoditises) their software, the agent tier doesn’t disintermediate them, enterprises pay premium prices for intelligence they could build themselves.
If right: first $10 trillion company. If wrong: historic shitshow and capital allocation fuck-up.
We’ll know which in 18-24 months when the revenue numbers tell the story.
The most honest moment: when they admitted they don’t know when the compute glut hits. “Two to three years or five to six, we can’t tell you, but it’s going to happen.”
At least they’re not selling certainty they don’t have.
This is quite a long one. I spent a decent amount of time on it, despite the help of AI tools - I hope you liked it as much as I did! It’s always interesting to listen to such episodes, even if you disagree with their views.
Until next time, Rachid




