The Netflix Playbook for When Your AI Provider Becomes Your Competitor
Windsurf lost Claude access with 5 days' notice. Your AI tool could vanish tomorrow. Here's the Netflix playbook that works: multiple providers, protected IP, and tested escape routes.
Hi there,
Today, I’m going to share some research and analysis on what we could call the ecosystem wars - where competitors are also clients and clients are also competitors.
Netflix pays AWS a substantial amount for cloud services each month, even though Amazon's Prime Video competes directly with Netflix. Every episode you watch indirectly supports Amazon's cloud revenue. Netflix knows these funds support a competitor, yet it keeps paying because building its own global infrastructure would cost billions more.
You're facing the same situation with AI. If you're using ChatGPT, OpenAI might eventually compete with you. Building on Claude means Anthropic can see which markets are growing through usage patterns. Google has a long history of entering profitable markets they've observed through their platforms, from the EU Google Shopping case to launching Google Flights after acquiring ITA Software.
This isn't about data theft, since OpenAI and Anthropic don't train on business customer data by default. This is about dependency. Once you build on their platform, switching becomes expensive and complicated, giving them significant leverage over your business.
The Windsurf Example
Windsurf built its coding tool on Anthropic's Claude models, which were extremely popular with developers. Everything worked smoothly until acquisition talks with OpenAI progressed. Anthropic gave Windsurf approximately five days' notice before restricting its access to Claude.
Within a week, Windsurf's service was disrupted. Customers were unable to use parts of the tool they'd paid for. Windsurf had no immediate backup ready. The OpenAI deal ultimately collapsed, but the damage had already been done.
Here's the twist: Anthropic has its own Claude Code product that competes directly with both Windsurf and Cursor. Windsurf wasn't just a customer negotiating with a competitor. They were a customer competing with their own provider.
This wasn't illegal or unusual. OpenAI's terms allow suspension "with or without prior notice" in some circumstances. Anthropic's commercial terms allow suspension under defined conditions. Check your own terms of service.
Your AI provider could decide tomorrow that you're competitive, that another customer is more valuable, or that your use case doesn't fit their strategy. If you can't switch providers quickly, you're vulnerable.
The best protection is running multiple providers simultaneously in active production. When one becomes unavailable, you redirect traffic rather than rebuild your company.
What Actually Works
Netflix runs on AWS but protects its differentiators. AWS handles servers, storage, and bandwidth, but the recommendation algorithms, content strategy, and viewer data stay under Netflix's control.
Netflix also built Open Connect, its own content delivery network, which gives it alternatives when AWS negotiations get difficult. Netflix became AWS's flagship customer, and AWS needs Netflix's reputation to sell to other enterprises. That creates a balanced relationship despite the competition.
Apply this to AI. You need the advanced capabilities that AI provides: complex analysis, pattern recognition, sophisticated automation. The trick is using these powerful features while protecting your unique context and methods.
Use AI for heavy lifting: data analysis, code generation, document processing, customer service automation. But wrap these capabilities in your own logic. Let AI do the processing, but you control the prompts, workflows, and decision rules.
For example, use AI to analyse customer feedback without revealing your segmentation strategy or pricing logic in the prompts. Use AI to generate code, but keep your architecture and business rules separate. Use AI for research and synthesis, but maintain your own knowledge base of insights and conclusions. Always audit your prompts to ensure they don't accidentally leak strategic details.
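If you're building rather than just prompting, that separation can look something like this. A minimal sketch, assuming the openai Python SDK; the model name, prompt, and decision rules are placeholders for your own:

```python
# The model does generic analysis; the segmentation and prioritisation
# rules stay in your own code, never in the prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_themes(feedback_text: str) -> str:
    """Send only the raw feedback to the provider; no strategy in the prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarise the main themes and sentiment of this customer feedback."},
            {"role": "user", "content": feedback_text},
        ],
    )
    return response.choices[0].message.content

def prioritise(themes: str, customer_segment: str) -> str:
    """Your decision rules live here, on your side of the API boundary."""
    if customer_segment == "enterprise" and "pricing" in themes.lower():
        return "escalate-to-account-team"
    return "standard-queue"
```

The provider sees feedback text and returns themes; it never sees how you segment customers or what you do with the answer.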
The key is treating AI like Netflix treats AWS: as powerful infrastructure you rent, not as the keeper of your secrets. Get the computational power without giving away what makes you special.
Run Multiple Providers Always
Start with two providers from day one. Use OpenAI for some tasks and Claude for others, perhaps with Google's Gemini handling specific functions. Build your system so switching is configuration, not code. Wrap provider calls with thin adapters. Every prompt should work across providers with minimal changes; where it doesn't, know what needs tweaking and testing before you lose access to your preferred model endpoint.
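Here's what a thin adapter can look like in practice. This is a rough sketch using the openai and anthropic Python SDKs; the model names are placeholders, and the LLM_PROVIDER environment variable is just one way to make the switch a configuration change rather than a code change:

```python
import os
from openai import OpenAI
import anthropic

def complete_openai(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def complete_anthropic(prompt: str) -> str:
    client = anthropic.Anthropic()
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

ADAPTERS = {"openai": complete_openai, "anthropic": complete_anthropic}

def complete(prompt: str) -> str:
    # Switching providers is an environment variable, not a rewrite.
    provider = os.environ.get("LLM_PROVIDER", "openai")
    return ADAPTERS[provider](prompt)
```

The rest of your application only ever calls `complete()`, so losing one provider means changing a setting, not refactoring every call site.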
This costs more and adds complexity, but when OpenAI raises prices or Anthropic changes terms, you'll have options instead of problems. For most AI users, this feels aspirational right now, but thinking about dependency risk today will pay off when you actually need to switch.
Test your switching capability weekly. Run canary traffic through a secondary provider. Maintain a golden set of prompts and evaluations for validation. If switching takes more than a few hours, improve your architecture.
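A golden-set check can be as simple as a script that replays your saved prompts through each provider and flags anything that no longer passes. A rough sketch, assuming the `complete()` adapter from the previous example; the test cases and pass criteria are purely illustrative:

```python
import os

# Illustrative golden set: real ones should cover your business-critical prompts.
GOLDEN_SET = [
    {"prompt": "Summarise our refund policy in one sentence.", "must_contain": "refund"},
    {"prompt": "Classify this ticket: 'I was charged twice.'", "must_contain": "billing"},
]

def run_golden_set(provider: str) -> None:
    os.environ["LLM_PROVIDER"] = provider  # route traffic via the adapter
    failures = 0
    for case in GOLDEN_SET:
        output = complete(case["prompt"])
        if case["must_contain"].lower() not in output.lower():
            failures += 1
            print(f"FAIL [{provider}] {case['prompt']!r}")
    print(f"{provider}: {len(GOLDEN_SET) - failures}/{len(GOLDEN_SET)} passed")

if __name__ == "__main__":
    for provider in ("openai", "anthropic"):
        run_golden_set(provider)
```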
I’ve been trying out Codex CLI and Gemini CLI as fallback options for Claude Code, and even with the latest models the performance gap is quite disappointing! Claude Code is still the clear agentic coding leader in my opinion.
OpenAI's Strategy
OpenAI took billions from Microsoft but signed deals with Oracle Cloud in June 2024 and Google Cloud in 2025. While the OpenAI API remains exclusive to Azure, they've diversified other workloads. Not because they needed alternatives immediately, but because they might need them later.
When tensions with Microsoft became public in 2025, with reports of contemplated antitrust action, OpenAI had alternatives ready. Without those options, they would have been completely dependent.
Document what happens if each provider becomes unavailable. How long to export your data, adapt your prompts, retrain your team. If switching would take months, you need better preparation.
Do This Now
Most of you are using ChatGPT or Claude through a browser. You're not running complex systems. Here's what you actually need to do.
Get a second AI account. If you use ChatGPT, get Claude. If you use Claude, get ChatGPT. Use both every week so you know how they work. Try Gemini and Grok.
Keep your important prompts. Save the prompts that run your business in a document. Not in the AI tool. If you lose access tomorrow, you need these to rebuild elsewhere.
Test switching monthly. Once a month, do your normal work on your backup provider. You'll quickly learn what works differently and what needs adjusting.
Don't get locked into special features. Custom GPTs, Claude Projects, and special integrations are convenient, but they trap you. Keep your work as portable as you can.
That's it. You don't need complex infrastructure. You just need options and the knowledge of how to use them.
The Reality
OpenAI doesn't train on business API or Enterprise data by default, but can retain API logs up to 30 days unless you arrange Zero Data Retention on eligible endpoints. Anthropic typically deletes chats within 30 days, though flagged content may be retained up to 2 years. Enterprise plans can set custom retention.
Through systems like Anthropic's Clio, providers analyse aggregate usage patterns while preserving privacy. They see which industries adopt fastest, what features get used most, which markets are growing. They don't need your individual prompts.
Looking at AWS launching Elasticsearch Service in 2015 and DocumentDB in 2019, Twitter cutting third-party clients in 2023 (access was cut first, followed by rule updates later), and Microsoft's Copilot overlapping with OpenAI, there appears to be a 3-5 year pattern from partnership to competition. It's a pattern, not a law.
Your AI provider might compete with you eventually. Build with this in mind. Use them for standard work while protecting your differentiators and staying ready to adapt.
Netflix still pays AWS. OpenAI still pays Microsoft. They make it work by protecting what matters and maintaining alternatives. Success won't come from achieving independence. Success comes from managing dependency through multiple providers, protected differentiators, and tested switching capability.
Learn from Windsurf's experience. Build flexibility now, while you can do it calmly rather than in crisis.
Avoiding vendor lock-in requires careful planning. If you're reassessing your AI model dependencies or designing a more adaptable AI strategy, I can help translate strategy into practical implementation. You can book a call with me to discuss your specific AI transformation challenges.
If you have any comments on this, reply to this email or leave a comment below.
Regards,
Brennan