Why AI Winners Do More, Not Less
Focus wins. Except it doesn't anymore. Here's what Anthropic figured out that changes everything.
Hi there,
We've all been taught the same thing about business: pick one thing and be amazing at it. Focus wins. But something different is happening in AI right now.
Look at Anthropic. In mid-2024, Claude was a chatbot. Thirteen months later, it could write code, create documents, handle compliance work, and connect to company databases. They went from chatbot to core enterprise infrastructure faster than most companies ship a product roadmap.
According to traditional business strategy, this should fail. Companies are supposed to die when they spread themselves too thin. However, Anthropic continued to add capabilities, and each one seemed to strengthen rather than dilute its position.
What Anthropic Actually Did
Over the last two years, Anthropic rolled out a steady series of additions to Claude and its surrounding products: connections to the other services you already use; documents through Artifacts in June; enterprise features with single sign-on and GitHub integration in September; the Model Context Protocol for system connections in November; Claude Code for command-line coding in February; financial services tools with compliance features in July; and a 1-million-token context window for Claude Sonnet 4 in August.
On paper, this looks unfocused. But watch what happened. Each addition built on what came before. Chat capabilities supported the coding features. Coding enhanced document creation. Documents made the enterprise tools more valuable. Instead of random features, these formed an interconnected system where each part reinforced the others.
Anthropic had advantages that most companies don't. Significant funding from major tech companies. Years of AI research preceded the launch of Claude. They expanded during a period of intense AI investment when organisations were actively looking for solutions. These factors matter when assessing whether their approach can work elsewhere.
What we can observe is that users often put these tools to work well beyond their intended purpose. Coding tools get used for planning and operations improvement. Document features become knowledge management systems. Whether Anthropic anticipated these uses or discovered them through user behaviour, they've leaned into them rather than restricting how their tools get used.
The 2-of-3 Multiplication Test
If you're considering adding capabilities, here's a helpful framework to guide you. Any new addition should meet at least two of these criteria.
First, it should demonstrably improve your existing product in ways customers recognise.
Second, it should leverage infrastructure you've already built.
Third, it should increase the cost or complexity of switching to competitors through genuine workflow integration.
Consider customer service AI. Adding sales intelligence could make sense because you're processing the same conversations and using existing infrastructure. The combination creates more value than the sum of its parts. However, adding unrelated features merely creates complexity without yielding compound benefits.
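As a rough illustration, the 2-of-3 test reduces to a simple checklist: score a proposed capability against the three criteria and proceed only if at least two hold. This is a minimal sketch, not a real scoring tool; the function name and the example capability are hypothetical.

```python
# Minimal sketch of the 2-of-3 test: a capability should meet
# at least two of the three criteria before you build it.

def passes_2_of_3(improves_existing_product: bool,
                  leverages_existing_infrastructure: bool,
                  increases_switching_costs: bool) -> bool:
    """Return True if a proposed capability meets at least two criteria."""
    criteria = [improves_existing_product,
                leverages_existing_infrastructure,
                increases_switching_costs]
    return sum(criteria) >= 2

# Example from the text: adding sales intelligence to a customer
# service AI reuses the same conversations and infrastructure.
print(passes_2_of_3(improves_existing_product=True,
                    leverages_existing_infrastructure=True,
                    increases_switching_costs=False))  # prints True
```

The point of writing it down this way is that the test is conjunctive enough to filter out defensive feature additions, but loose enough that a capability doesn't need to hit all three criteria to be worth exploring.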
There are real platform effects and network effects that come from building out adjacent capabilities. When you enable developers to easily integrate with your ecosystem, a significant amount of additional value can be created.
I’ve written previously about how, with each increase in model capability, many startups become unnecessary. Recently, watching the keynote for Anthropic's Financial Services launch, it became clear that they are emphasising connectivity to existing incumbents through data and analytics offerings. Over time, as model capability and tool usage improve, how many other players will be required in that ecosystem?
Why the Old Rules Don't Apply Anymore
AI has revolutionised the economics of software development. Smaller teams can develop more ambitious products more quickly. The same foundation models can power different applications, from chat to code to analysis. You're not starting from zero with each new capability.
But integration remains complex. Each connection point requires maintenance. Documentation grows. Support complexity increases. Some companies are adding AI features reactively, responding to competitive pressure rather than a strategic opportunity. It's worth distinguishing between genuine capability stacking and defensive feature addition.
This is the real risk of operating close to the frontier of technology: there will be failures, and some will be very costly, or embarrassing in the case of a breach. AI tools offer real leverage, but maintaining quality at a rapid pace requires a serious focus on clear guardrails and controls.
What Actually Happens When You Try This
Real capability expansion takes months, not weeks. Start by identifying what your product does well today, then explore adjacent possibilities your customers are already asking for. The key word is adjacent. The connection between current and new capabilities determines whether you're building a platform or just bundling features.
Build something basic to test the concept. Ship it to a subset of users and observe actual usage patterns. If adoption is low, stop. If users find unexpected applications, pay attention. These emergent uses often reveal whether you've built something that compounds value or just adds complexity.
The most valuable signals come from actual user behaviour. Support tickets requesting specific functionality. Workarounds users have created. Integration patterns with other tools. These indicate real needs rather than hypothetical opportunities.
Look at YouTube and you'll find an explosion of recent videos where users take the capabilities of Claude Code (it’s not just a coding agent!) and apply them to almost any business problem you can think of, from small business accounting and operations process improvement to designing and testing n8n automation workflows, and more.
The Signals Your Users Are Already Sending
When users consistently ask whether your product can do something slightly outside its current scope, they're showing you adjacent possibilities. When they build elaborate workflows combining your tool with others, they're revealing integration opportunities. When usage patterns diverge from the intended design, they reveal actual needs rather than assumed ones.
But not all unexpected usage is innovation. Sometimes it indicates product confusion or missing core functionality. The distinction matters. Building on genuine emergent behaviour can multiply value. Building on confusion just adds complexity.
The Decision Every Company Faces
Not every company should expand capabilities. Focused tools can succeed when they solve specific problems exceptionally well. The decision depends on your market, resources, and the specific needs of your customers.
Anthropic had funding, talent, and timing. They entered a market ready for integrated AI solutions. Your situation might be entirely different. Your customers might prefer simple, focused tools. Your resources might be limited. Your market might not support platform pricing. These aren't failures; they're constraints that should shape strategy.
Your Next 90 Days
If you decide to expand capabilities, be realistic. Budget months, not weeks. Plan for integration complexity. Expect documentation and support needs to grow. Choose additions that pass the 2-of-3 rule: improve existing products, leverage current infrastructure, or increase switching costs through genuine value.
Build incrementally. Test with real users. Kill what doesn't work. Double down on what does. Watch for emergent uses but verify they represent real value, not just novelty.
Anthropic went from chat to infrastructure in 13 months. Whether this represents a new model for AI companies or a specific response to unique market conditions remains to be seen. The observable trend is that some AI companies are successfully stacking capabilities while others remain focused. Both approaches can be viable, depending on the context.
The question isn't whether you should copy Anthropic's playbook. It's a question of whether capability stacking makes sense for your specific situation. That depends on factors only you can assess: your customers, your resources, your market, and your timing. Make decisions based on those realities, not on what worked for others in different circumstances.
How I Can Help
If you’re ready to move from an AI transformation idea to executing on an AI transformation in your business, but need help working through the challenges you face, book an introductory call with me.
I have some limited availability to work with select clients: identifying the blockers to your AI transformation readiness, designing clear strategies to overcome them, and starting to realise the benefits of AI-enabled change.
Over the next few months, I will complete my essay series, which originally envisioned 20 articles (this is the 13th). I’d be interested in hearing from you about what else you think I should write about. Please let me know by replying to this email or dropping a comment or DM.
Regards,
Brennan