The Triple Boundary Framework
Learn the triple boundary framework for managing capability, learning & coherence
Hi there, welcome to my new subscribers! You’re getting this email because you subscribed yourself or another Substack recommended my newsletter when you signed up. I’m challenging myself to post short AI-related videos daily for 30 days. Follow along if you’re interested in video content. If you know someone who’d enjoy my newsletter, please share it. You can reply to this email or comment with any feedback you have for me - it’s much appreciated. Brennan.
Why will so many firms fail at AI transformation? The answer isn’t about the technology they’re using. AI models are improving every few weeks, and there are now clearer playbooks to architect AI-first workflows to solve problems better, faster, and cheaper.
The answer revolves around how firms manage their boundaries. Firm boundaries used to be defined by make-or-buy decisions; now they revolve around what firms can do (the capability boundary), how they learn (the learning boundary), and how they stay aligned (the coherence boundary).
A significant challenge for businesses is the speed of change in AI. This isn’t just technology hype - things are moving faster than most organisations can cope with. They are stuck in pilot projects and proof-of-concept hell, seeing no benefit from their AI spending beyond reduced headcount.
For example, if you were building a business around meeting recording, OpenAI rolled out recording features in ChatGPT this week that kill off much of that category. When they release AI wearables, even more AI wrapper startups will bite the dust.
I think winning in the AI era will require managing capability, learning, and coherence boundaries as if you’re managing a portfolio of bets: small bets on tactical decisions and big bets on strategic direction and taste.
Leaders must decide their risk appetite on each dimension and act accordingly. Their competitors will not just be their current peers, but anyone with industry experience keen to take advantage of what AI-first thinking makes possible for startups.
In this essay, I will explain my research and thinking around these boundaries, connect it to the academic literature where I can, assess what evidence I can find, and offer some practical guidance for handling the triple boundary problem.
The Capability Boundary
The capability boundary is the frontier of what a firm can do internally without external partners. Business owners know what their teams can do within the constraints they face.
The challenge with AI is that it’s not just about having the tools; it’s about getting the make, buy, or automate decision right, given how fast AI tools are evolving.
Some capabilities are built on proprietary data or trade secrets, which you want to control as a source of competitive advantage. There are others where you have no advantage - you won’t build your own ChatGPT or Claude equivalent in your suburban office park.
This part of the portfolio requires clear thinking about a sustainable advantage in an AI world. Where previously outsourcing and offshoring made sense, more leaders might decide to automate these functions internally.
Where partnerships with technology vendors reigned supreme, these legacy arrangements might be broken down into modular components rebuilt internally with AI coding tools.
The Learning Boundary
The learning boundary is how far the firm can extend into new knowledge domains. This isn’t about e-learning modules or training courses. It’s about taking external knowledge (I need to use AI to automate this workflow) and turning it into real value (I can not only automate this workflow, but also improve the quality and deliver to customers faster).
Research & development and training budgets are minimal at most firms. Yet the heart of the learning boundary is curiosity. You either employ curious people or you don’t. If you have people pushing back against using AI in their work, as of writing this in June 2025, you have a five-alarm fire on your hands.
Another way to extend this boundary is through informal learning from “strategic partnerships” and hiring people with AI skills. However, the challenge of the AI era is that people with the best AI skills have already figured out they no longer need you, the employer.
Because AI is moving so fast, you are falling behind if you can’t take new AI features and quickly test and deploy them in a production environment.
It’s hard to articulate the seriousness of this challenge any more clearly than that. If you are still “thinking about” your firm’s AI strategy, please reply to this email, and we can discuss some practical steps to get back on track.
The Coherence Boundary
The coherence boundary asks one question: can you align all of this rapid change with your strategy? This is about ensuring AI initiatives are helping your business win in the marketplace.
It goes beyond pure alignment to include the tension between innovation and coordination. Some of your people will have great AI ideas that will nevertheless attract negative regulatory attention. Other ideas will be brilliant, but the output quality won’t yet meet some customers’ expectations.
This means appointing a “Chief AI Officer” is a losing strategy. AI will impact every function and must be in everyone’s KPIs and incentives. Standard corporate boomer thinking on managing change will lead to commercial extinction over the next 4-5 years.
There is a need for governance, but the only path forward is trial-and-error from the top to the bottom of the organisation. Trying to centrally control and ensure every single AI initiative ticks all of the process boxes will kill innovation and hand enormous advantages to your competitors or AI-first startups in your industry.
I’ve written previously about how the target operating model transformation will be replaced with Operating Model Compression, and this war will be fought at the coherence boundary. When all functions can eventually be replaced by AI, supervised by a handful of skilled “gardeners,” the operating model needs a complete rethink.
Extending The Research
The triple-boundary framework isn't entirely new - it builds on decades of management research, but brings these ideas together in a way that makes sense for today's technology-driven organisations.
The learning boundary draws directly from Cohen and Levinthal's work on absorptive capacity. Companies need existing knowledge to recognise and use new information from outside.
The capability boundary extends David Teece's work on dynamic capabilities. It explains how companies sense opportunities, seize them, and reconfigure their resources to stay relevant.
The coherence boundary builds on research about strategic alignment, particularly Leinwand and Mainardi's work, which shows that companies perform better when their internal abilities match their market position.
What makes artificial intelligence different is how fast everything moves. Traditional theories assumed companies could take their time, review things annually or quarterly, and adjust.
However, AI tools and technologies advance so rapidly that the landscape has shifted multiple times by the time most organisations finish planning a change. This time crunch means companies can't handle these boundaries one at a time or on a schedule.
When new AI technology emerges, companies must immediately decide whether to develop it, pay for access, or team up with others (capability). They must also learn to understand and use it (learning) and ensure it fits their future plans (coherence). The old luxury of step-by-step decision-making has vanished.
Most importantly, these boundaries connect and affect each other far more under AI than traditional theory suggested. Strong technical capabilities without constant learning mean you'll quickly fall behind.
Being perfectly organised and aligned won't help if you can't adapt to changing technology. All the learning in the world is useless if you can't turn it into practical capabilities or connect it to your business goals.
The framework's real value is showing how these three elements work together as one system, not separate parts. Success doesn't come from being brilliant at just one boundary but from managing the push and pull between all three.
Identifying Failure Risks
There are many ways an AI transformation or the adoption of AI can go wrong. One is overextension, where you try to build all capabilities internally. You could end up with an unwieldy and poorly optimised portfolio that leaves you uncompetitive compared to peers who leverage best-of-breed tools in each category.
Another risk is hollowing out, where you over-rely on external providers. A related trap is chasing the latest thing in AI while you still have your training wheels on, which can prevent you from taking even the first steps required to start getting benefits from AI.
The constant stream of AI hype and information overload, combined with a lack of focus on your strategy, can lead to insufficient attention on the projects that would actually move the dial. An obsession with governance, risk management, and processes can stifle innovation, letting competitors race ahead or startups disrupt you without the burden of those processes.
The worst position to be in is pilot hell, or proof-of-concept hell, where all these little bets on AI achieve nothing because the problem is the entire operating model and foundation of the business. It exposes everything not done properly over the last decade: you need high levels of technology capability and data quality to get the most out of AI transformation.
Finding Your Current Boundaries
You can figure out your capability boundary quickly: How long does it take to onboard a new hire? How long does it take to onboard a new supplier? Given the opportunity to use a new service via API key, how long does it take your team to make that happen in a production environment? How many times a day do you release code to a production environment?
If you smile to yourself reading these questions, you have a decent amount of corporate-world experience with technology and change, like I do! The challenge of the AI era is that all of these corporate hygiene processes need to be fixed before you are even at the starting line to get the most out of AI.
The learning boundary is also quite easy to diagnose: how good is your documentation? How long does it take to get a new hire operating at 100%, and how is that knowledge transfer happening? What percentage of your staff have patents or published papers in your industry? How good are you at delivering successful change? When your leaders see good things at competitors, how long does it take to do something similar or better?
The coherence boundary is a little more complex to diagnose - humans often lie to themselves about their capabilities. We sometimes need other people to tell us our weak spots. Many board directors and C-level executives have massive blind spots around the coherence of their firms. There can often be a rhetoric vs. reality gap. These gaps become immediately apparent in the AI era because you can’t deliver the benefits inside an incoherent firm.
Boundary Management Principles
People today decide whether to buy or automate a capability. Tomorrow, AI agents will make most of these decisions. The best place to start is with the easiest-to-automate capability, expanding outward over time.
Based on my experience in the corporate world, existing governance structures and processes will doom AI initiatives to failure before they even begin. Unless the mindset, and an openness to failure through trial and error, are there, there won’t be enough room to make the bold decisions required to push these AI tools to their logical extent in business process automation.
The most challenging management problem is that the speed of change and the risk of AI disruption do not align with quarterly, annual, or 5-year planning cycles. Every few months, AI models complete more complex tasks that they previously couldn’t deliver at an acceptable quality level.
This means that the compressed operating model you’re designing needs to account for a shockingly high proportion of capabilities being done better by AI than by people over the next 5 years.
Why AI Transformation Is Hard
Another problem is the desire to control and measure everything that happens in a business. Many process-level KPIs become irrelevant as AI tools replace each capability, and many functional-level KPIs shift as more automation delivers the outcomes. If measuring success isn't carefully considered, AI implementation risks looking like the “closing a service desk ticket” problem.
Different industries will require vastly different levels of quality. Some processes may require a human in the loop for many years. In some industries, customers might pay more to interact with a human than with AI customer support or account management.
Small firms, in particular, face a double-edged sword: they have the flexibility to take maximum leverage from AI tools. However, they face competition from larger firms that might replicate so much more internally, at such low cost, that entire categories of spending decline altogether.
Improving The Framework
The Triple Boundary Framework might be easier to apply in hindsight, explaining how the 2030 winners made it happen. It assumes a reasonably high level of capability and focuses on the strategy and execution of AI transformation, while many firms are still talking about AI rather than taking concrete action daily.
The boundaries between capabilities, learning, and coherence might bleed together more in reality, and the measurement and management challenges are very real. However, it is still a helpful way of thinking to help leaders figure out where they are and how to move from their existing operating model to a compressed one that is fit-for-purpose in an AI world.
The Triple Boundary Framework’s core idea is that a firm is an integrated system. To win in the AI era, you must know what you can and can’t do - and choose honestly between make, buy, or automate.
An obsession with turning knowledge into concrete innovation and value creation will become the minimum. Ensuring coherence and alignment between a business’s strategy and its execution will be more critical than ever before.
What next?
Since leaving my corporate job in February, I have researched and experimented with AI. One idea I keep returning to is that AI exposes what hasn’t been done properly.
Firms previously could gloss over issues in people, processes, or platforms. They could make things work by throwing money at the problem, outsourcing this or offshoring that.
The AI era is ruthlessly unforgiving. Many AI-first startups built on the back of the frontier model API endpoints are already gone. Their shelf life was as short as six months in some cases.
My essay series is expanding my thinking about this intelligent firm concept. I'm increasingly concerned when I read articles or listen to interviews about how many corporate leaders think about AI.
If you want to discuss this thinking in more detail, please reply to this email and let me know. I’m working on developing more structured diagnostics and playbooks. I want to help people think through the challenges they face around AI transformation and keep building out my thinking on this topic.
Firms must adapt faster than ever before. It’s no longer hype: they must speed up every decision and get moving.
The time for pilots and proofs of concept is over - it’s time to upgrade.
Regards,
Brennan