The Great API Bait-and-Switch
As AI agents reshape commerce, API dominance becomes the new path to monopoly.
Hi there,
Welcome to all my new subscribers. This week, we return to my weekly essay series where I’m building out a framework for thinking about the transformation required for businesses to win in the AI era. Last week, I wrote about the algorithmic war; today, I examine the composability of APIs and how this mega trend interacts with AI.
The Day the Bill Came Due
In 2018, Google Maps held close to 90% of the web-mapping market, giving developers an easy way to embed maps on their websites and use map data in their tools. Then Google restructured its pricing, raising costs dramatically. The move shocked developers, and it highlighted the risk of a single vendor’s API endpoint becoming overly dominant.
The ease of use and composability that initially led developers to “easily integrate” with the Google Maps API ended up strengthening Google’s market dominance.
This commercial outcome wasn’t accidental - as AI, APIs, and AI agents become the core components of commerce, this is the new endgame for digital domination.
How Vendors Seduce, Scale, and Squeeze
APIs, or Application Programming Interfaces, enable developers to integrate new capabilities with minimal friction. Stripe built its business on making it easy for developers to integrate payment processing into their websites. AWS has built an empire of services that are accessible and composable via APIs, dominating the cloud computing landscape.
APIs were designed to facilitate easy integration by prioritising the developer experience. This has created enormous economic value because firms can focus on areas where they have advantages instead of building everything they use internally.
Over time, this has created enormous network effects. Most customers of enterprise software use third-party applications that use APIs to integrate with the core product. Slack has hundreds of thousands of custom applications, and developers using AI coding tools utilise OpenRouter or Requesty to access multiple model endpoints with a single API key in their workflow.
As more developers created additional functionality, the number of users increased, which in turn attracted more developers to build more functionality. And just as human developers find integration via APIs “easy”, AI coding agents now find it easy too.
This is why I’ve been writing that making your business discoverable for AI agents is a key area of focus over the next few years, as more integration decisions are pushed to AI agents.
The squeeze comes when platforms, once dominant, start to exploit the dependency they’ve created. Once a firm has gone too far with one vendor, migrating away can be very costly. This is why trying to be as AI-model-provider-agnostic as possible is a design pattern that will save a lot of headaches over time.
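The provider-agnostic pattern described above can be sketched as a thin routing layer that reduces every vendor to one interchangeable interface. This is a minimal sketch: the vendor names are hypothetical and the backend functions are stubs standing in for real SDK calls, but the shape is the point - swapping vendors becomes a one-line configuration change rather than a rewrite.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Each provider is reduced to a single interchangeable callable: prompt -> text.
# These bodies are stubs; in practice each would wrap that vendor's SDK.
def _call_vendor_a(prompt: str) -> str:
    return f"[vendor-a] {prompt}"

def _call_vendor_b(prompt: str) -> str:
    return f"[vendor-b] {prompt}"

PROVIDERS: Dict[str, Callable[[str], str]] = {
    "vendor-a": _call_vendor_a,
    "vendor-b": _call_vendor_b,
}

@dataclass
class Completion:
    provider: str
    text: str

def complete(prompt: str, provider: str = "vendor-a") -> Completion:
    """Route the prompt through whichever backend is configured.

    The calling code never touches a vendor SDK directly, so migrating
    from one model provider to another touches this module only.
    """
    backend = PROVIDERS[provider]
    return Completion(provider=provider, text=backend(prompt))
```

The calling code stays identical whichever vendor is active: `complete("Summarise this contract", provider="vendor-b")`. The design cost is giving up each vendor’s proprietary extras, which is exactly the trade-off between convenience and lock-in this essay is about.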
The costs of unwinding single-vendor dependence include rework, data transfer and egress fees, and the risk of losing functionality or disrupting production delivery to customers. With the growing focus on operational resilience in regulated industries, many decisions previously left to technology leaders are now life-or-death commercial decisions - especially if a vendor suffers a cybersecurity crisis or operational failure during a peak period of demand.
Why Nobody Leaves the Hotel California
The reason a firm exists is that it’s cheaper to do things internally than send a task out to the marketplace. Transaction costs drive decisions, and the switching costs of moving from Vendor 1 to Vendor 2 can become prohibitive. This is the Hotel California trap you want to avoid if at all possible.
The challenge in the Big Tech and Big AI era is that there are only a handful of suitable, serious enterprise providers. The cost of staying is sometimes lower than the cost of leaving plus the risk of disruption. You can see this in legacy technology decisions at banks and insurers: yes, they could rebuild everything from scratch, but they cannot afford the regulatory sanctions that would follow if a migration from a mainframe to a vendor product built in Java went wrong. So they’re stuck.
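The stay-versus-leave decision above is, at its core, a simple expected-cost comparison. The sketch below uses entirely illustrative numbers (the function name and figures are mine, not from any real migration), but it shows why incumbents keep the business even while charging a premium.

```python
def cheaper_to_stay(annual_premium: float,
                    migration_cost: float,
                    disruption_risk: float,
                    disruption_impact: float,
                    horizon_years: int) -> bool:
    """Compare staying (paying the incumbent's premium every year)
    against leaving (one-off migration cost plus the expected loss
    from a migration going wrong)."""
    cost_of_staying = annual_premium * horizon_years
    cost_of_leaving = migration_cost + disruption_risk * disruption_impact
    return cost_of_staying < cost_of_leaving

# Illustrative numbers only: a 500k/year vendor premium over three years
# (1.5m) beats a 2m migration carrying a 10% chance of a 5m regulatory
# incident (expected leaving cost: 2.5m), so the firm stays.
print(cheaper_to_stay(500_000, 2_000_000, 0.10, 5_000_000, 3))  # True
```

The vendor’s pricing power is the gap between those two numbers: the incumbent can raise the premium right up to the point where leaving becomes the cheaper option.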
There are also aggregation monopolies, where a first mover builds a decent enough API and becomes dominant for an entire industry. Many fintech startups rely on open banking and financial data APIs, but there are only a handful of aggregators. They’re so powerful that Visa tried to acquire Plaid, and the DOJ blocked the deal - a signal that a connectivity API can have market power at the same level of concern as a traditional card payment network.
Stripe’s hold on payments in SaaS is substantial. Firms stay and expand their wallet share with Stripe because it is easy to get started, and incumbent legacy providers remain years behind because they have never prioritised the developer experience.
The AI-era shift follows the same pattern: the firms that dominated did so by optimising for developer experience. Extending that same focus on easy-to-use API endpoints to cover the additional use cases and needs of AI agents will underpin the next wave of dominance.
Fewer Players, Higher Stakes
Hundreds of billions of dollars are being invested in capital expenditure on AI. Data centres and the training of frontier models are expensive. It’s reasonable to think a handful of frontier players will dominate, and even the rising open-source competitors still require enormous resources.
When only a few players dominate the foundation models, a significant amount of market power is concentrated in a handful of Large AI firms, including OpenAI, Google, xAI, and Anthropic.
The growing capability of AI models also means that data moats may be less valuable than they once were; users and distribution matter more, and network effects are already baked in - ordinary people often say they “use ChatGPT” to mean any use of AI.
This new world of vertical integration and dependence on model providers means that each upgrade in model capability eliminates entire categories of startups, and pushes a growing number of tasks from human workers to AI tools.
The orchestration layer offers some hope for preventing lock-in. And the competitive pressure from significant improvements every few months, combined with the rise of open-source models, gives us the illusion of some choice. However, we could just be shifting the present and future antitrust issues to a different layer of the same ecosystem.
The New Rules of Power
Dominant players create new standards, which harden into de facto proprietary standards. S3, for example, is a storage service developed by AWS; rival cloud storage services now advertise “S3 Compatibility”.
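“S3 Compatibility” in practice means your client code stays identical and only the endpoint changes - which is exactly how a proprietary interface becomes the industry standard. A minimal sketch of the idea, assuming the boto3 SDK as the client (the vendor names and URLs below are invented for illustration):

```python
# Map of S3-compatible providers to their endpoints. The interface is
# AWS's; competitors differentiate only on where the requests go.
# Vendor names and URLs are illustrative, not real services.
S3_COMPATIBLE_ENDPOINTS = {
    "aws": None,  # boto3's default endpoint: real S3
    "vendor-x": "https://s3.vendor-x.example",
    "vendor-y": "https://storage.vendor-y.example",
}

def s3_client_kwargs(provider: str) -> dict:
    """Return the kwargs you would pass to boto3.client("s3", **kwargs).

    Switching storage vendor changes one keyword argument; every bucket
    and object call in the codebase stays untouched.
    """
    endpoint = S3_COMPATIBLE_ENDPOINTS[provider]
    return {} if endpoint is None else {"endpoint_url": endpoint}
```

Note who wins either way: even when a customer leaves AWS, the replacement must speak AWS’s protocol, so the standard - and the gravity - stays with the dominant player.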
If a platform controls a bottleneck component that integrators rely on, it can profit handsomely from this over time. Because customers value simplicity over customisation, if possible, and because of the complexity of change, this creates more lock-in.
Once a platform dominates in one category, it can start to monetise more aggressively. Antitrust regulators will take decades to catch up - just look at how long it has taken for Google to face severe scrutiny in its search and advertising business, which is over 20 years old.
The paradox of composability with APIs is that there is less lock-in at a system level, but not every option in the system is feasible. Only the top providers meet all the standards and criteria for selection as an enterprise provider. We have more choice in theory, but less in practice.
This is also the case with open-source AI models. Someone still needs to invest in the infrastructure required to run and support them at a profitable level. Even if you run a local model with LM Studio or Ollama to test the open-weight models OpenAI released this week, you are still paying for dedicated hardware or the opportunity cost of using those system resources for something else.
Innovation is happening at an increasingly rapid pace, but power is concentrating at an even faster rate. The question that matters is not whether a focus on APIs will lead to greater ease of use and expanded functionality, but how much control sits at the connection points.
The same tools that give us flexibility also create the risk of lock-in and dependence. Many of these issues are openly discussed in the AI safety and AI risk communities, yet there is little discussion of the consequences of Big AI dominating the market over the coming years. Given the incentives involved, attempts at regulation, such as those by the European Union, have done little to mitigate these risks.
What Does It Mean For My Business?
The main takeaway for business owners is that APIs are a double-edged sword. You both want and need easy API access for tasks you want to complete faster, better, and more efficiently. You also want to offer easy API access for your customers and partners to interact with your business machine-to-machine.
This means that, as part of your AI transformation, building out API endpoints and making it easy for AI agents to interact with your products and services will become a key differentiator.
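What “easy for AI agents to interact with” looks like in practice is a machine-readable description of what your business exposes, which an agent can fetch and act on without a human reading your docs. The manifest below is a hypothetical shape I’ve invented for illustration - in practice an OpenAPI document typically plays this role - but it shows the kind of structured self-description agents need.

```python
import json

# A hypothetical machine-readable manifest an AI agent could fetch to
# discover what this (fictional) business exposes. Every path, field,
# and parameter name here is illustrative, not a formal standard.
manifest = {
    "service": "example-store",
    "endpoints": [
        {
            "path": "/v1/quote",
            "method": "POST",
            "description": "Return a price quote for a basket of items.",
            "parameters": {"items": "list of SKU strings"},
        },
        {
            "path": "/v1/order",
            "method": "POST",
            "description": "Place an order for a previously quoted basket.",
            "parameters": {"quote_id": "string"},
        },
    ],
}

# Serve this as JSON from a well-known URL and an agent can discover,
# quote, and order without ever reading your marketing site.
print(json.dumps(manifest, indent=2))
```

The plain-language `description` fields matter as much as the schema: they are what the agent’s model reads when deciding which business to transact with - which is precisely the discoverability battleground described above.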
It also means that when building your platform, you should strive to minimise vendor lock-in as much as possible. We are already seeing the impact, as Anthropic’s API endpoints are reportedly experiencing issues due to very high demand. As more critical services begin to depend on AI provider APIs, the demand for resiliency will increase even further.
If you’d like to discuss your AI transformation with me, you can book a complimentary call to identify blockers and develop strategies to overcome them.
Regards,
Brennan
P.S.: I’m going to be posting more long-form video content on YouTube, creating more accessible versions of this content on Substack and sharing more practical strategies for AI transformation.


