The playbook for building an intelligent firm
AI agents are drastically lowering coordination costs. Here is how to rethink your operating model and move past blockers.
Hi there,
In today’s newsletter:
The Anthropic and SpaceXAI deal
Why the theory of the firm needs an upgrade
The playbook for building an intelligent firm
The Anthropic and SpaceXAI deal
If you’ve been using Claude, Claude Code, or Claude Cowork over the last few months, you know that at times it just doesn’t work.
The rapid growth in Claude usage has put enormous pressure on Anthropic’s ability to service that demand with the compute supply it has access to.
As you can see below, in an industry where multiple nines is considered good uptime, Anthropic’s product suite has fallen well short.
Dario Amodei revealed that in the early part of 2026 they were experiencing 80x growth when they had planned for 10x growth.
Growth like this is completely insane, and you have to give Anthropic some indulgence; neither OpenAI nor Google is in the same position. From the perspective of enterprise customers, though, uptime at that level would immediately disqualify most vendors from contention, or from getting past procurement hurdles.
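For context on what “multiple nines” actually means, here is a quick back-of-the-envelope calculation (my own illustration, not anyone’s published SLA figures):

```python
# Back-of-the-envelope: annual downtime permitted at each uptime level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def allowed_downtime_minutes(uptime_pct: float) -> float:
    """Minutes of downtime per year that a given uptime percentage allows."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for nines in (99.0, 99.9, 99.99, 99.999):
    minutes = allowed_downtime_minutes(nines)
    print(f"{nines}% uptime allows {minutes:,.1f} minutes of downtime per year")
```

At “four nines” (99.99%) you get roughly 52 minutes of downtime in an entire year; repeated multi-hour outages put a vendor orders of magnitude away from that bar.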
Fortunately for Anthropic, they are selling one of the best implementations of general intelligence available, consumable across a wide range of products or via API endpoints, and they charge a premium, especially for their top model, Opus 4.7. Their recent marketing of the Claude Mythos model has caused a panic among governments, corporates, and regulators around the world.
Go beyond the headline, though, and a lot of the stories written about AI miss the wood for the trees: ChatGPT 5.5 is already benchmarked at similar or greater cyber capability than Claude Mythos.
For enterprise customers, especially big ones, and despite the preference of developers and non-technical users alike for Claude, what we’re seeing here is the need for a multi-vendor strategy.
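In practice, a multi-vendor strategy can start as simply as a failover wrapper at the API-call layer. The sketch below is a hypothetical illustration only: `call_primary` and `call_secondary` are stand-ins for whatever vendor SDK calls you actually make, and the retry-then-fallback policy is my assumption, not anyone’s production design.

```python
import time

class VendorError(Exception):
    """Raised when a provider call fails (timeout, 5xx, overload)."""

def with_failover(providers, prompt, retries_per_provider=2, backoff_s=0.0):
    """Try each provider in order; retry transient failures before failing over."""
    last_error = None
    for name, call in providers:
        for attempt in range(retries_per_provider):
            try:
                return name, call(prompt)
            except VendorError as exc:
                last_error = exc
                time.sleep(backoff_s * (attempt + 1))  # simple linear backoff
    raise RuntimeError(f"All providers failed; last error: {last_error}")

# Hypothetical provider stubs standing in for real vendor SDK calls.
def call_primary(prompt):
    raise VendorError("503: overloaded")  # simulate the outage scenario

def call_secondary(prompt):
    return f"response to: {prompt}"

name, answer = with_failover(
    [("primary", call_primary), ("secondary", call_secondary)],
    "summarise this contract",
)
```

The point is not the code but the posture: your dependency on any one endpoint should be a routing decision you control, not an architectural fact you discover during an outage.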
How would you feel if the chart below was powering most of your operations? You would be pretty filthy, especially in a regulated industry, where any outage at all has to be reported to the regulator, along with all the consequences that those notifications bring.
All this background is important for understanding what’s actually going on when Anthropic does a deal with SpaceXAI. They are growing so fast that they need compute, and they do not care where they get it from. Almost all of the deals they have announced so far in 2026 are for capacity that is yet to come online with other partners.
SpaceXAI effectively had an under-utilised capital asset that it had spent a fortune building and running, sitting ready as a “real option”. The deal Anthropic struck gives them access to the entire Colossus 1 data centre, with something like 220,000 GPUs of various types originally focused on training earlier Grok models.
I was astonished by how bad so many of the takes on X were about what this means for SpaceXAI, for Anthropic, and for Grok. Colossus 2 is an even bigger and better data centre where more Grok models are still being trained. Some analysts have estimated that the $5 to $6 billion per year in GPU-rental revenue Anthropic is likely to be paying will roughly offset the annual costs of SpaceXAI itself.
So what’s actually happened? Anthropic gets an immediate short-term boost to their compute capacity, which is why they are able to slightly ease the limits. They are still facing limits, though, given the enormous continued growth of their business.
SpaceXAI is part of SpaceX, which is going to IPO shortly, and it will now get billions of dollars in high-margin revenue. We will soon know through the SEC filings roughly what Anthropic is paying, because SpaceX will have to disclose a whole new category of hyperscaler revenue.
That revenue will attract a different multiple from other parts of SpaceX’s business, and it will give more debt-servicing capacity to build things like the TerraFab, or more data centres for SpaceXAI.
What does this mean for AI transformation? You need to understand what is happening at the top of the supply chain, especially if you are building a business that sits on top of the frontier labs building the AI models and the ecosystem of data centres serving the compute you rely on.
Just as you would expect your chief technology officer to be well-versed in every little thing that is critical and moves the dial for your AWS, GCP, or Azure dependency, business leaders need to know what is happening under the hood of whatever API endpoint they rely on: who is serving the model, and where are the dependencies?
Why the theory of the firm needs an upgrade
Understanding the physical reality of this supply chain is the first step; the next is restructuring your firm to actually capitalise on it.
An operating model is just people, processes, and platforms, and coordinating all of these factors used to be a heavily human task. The theory of the firm is a concept I have written about extensively over the last year.
As part of that research, I have been thinking about what changes as AI capability rises: the need for verification and accountability grows, and a firm faces a choice between allocating tasks to humans, humans using AI, AI agents themselves, or external suppliers.
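That four-way allocation choice can be made concrete with a toy routing rule. Everything below is my own illustrative assumption, not an established model: the 0-to-1 scales, the thresholds, and the policy itself are placeholders for whatever evidence a real firm would use.

```python
# Toy model of the allocation choice: route a task by (a) whether it is core
# to the firm, (b) how capable AI is at it, and (c) how costly the output is
# to verify. Scales run 0 to 1; thresholds are illustrative assumptions only.
def allocate(ai_capability: float, verification_cost: float,
             core_to_firm: bool = True) -> str:
    """Return who should own a task under this toy policy."""
    if not core_to_firm:
        return "external supplier"   # non-core work goes outside the firm
    if ai_capability < 0.3:
        return "human"               # AI not yet good enough to help much
    if ai_capability < 0.7 or verification_cost >= 0.5:
        return "human using AI"      # capable AI, but a human must verify
    return "AI agent"                # capable AI and cheap verification
```

The interesting cell in this toy model is the last one: as verification gets cheaper (better evals, better guardrails), tasks migrate from “human using AI” to “AI agent” even when AI capability itself has not moved.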
There are a number of economists doing good work in this space, but remember that there is a lag between how fast AI is changing and the pace at which widely held beliefs in a profession like economics update through journal articles, peer review, and conferences.
The intelligent firm, where more and more decisions move from human oversight to AI agents making them inside guardrails, presents a real dilemma.
All of this might seem a bit over the top if you’re running a function, a business, or an AI project. Yet the simplistic reactions to the SpaceXAI-Anthropic deal, things like “Grok is over” or “Anthropic shouldn’t have given money to SpaceX,” reveal what’s missing: the next level down of thinking about how the ecosystem of your supply chain works.
We’re all building on AI API endpoints, and as part of the duties we owe, we need to go deep. We cannot be cosmetic in our thinking and application of AI.
In the next section, I am going to share more of my thinking on the different levers a firm has in its operating model to become more intelligent: moving the decision-making and strategic layer from human to machine, and getting the absolute most value from AI and technology.
The playbook for building an intelligent firm
Keep reading with a 7-day free trial
Subscribe to Getting AI To Work by Brennan McDonald to keep reading this post and get 7 days of free access to the full post archives.