We live in the AI era. The liquid organisation is one idea about how fluid the structure and boundaries of what we currently think of as a business might become because of AI, APIs, and AI agents.
Over the past few weeks, I've written about how tasks can now be completed with either time or tokens, why operating model compression is required to stay competitive, how the choice to make, buy, or automate will evolve, and what an intelligent firm could look like.
This article is the next in my essay series and proposes the concept of a liquid organisation. It incorporates some of the ideas I’ve already written about, with some grounding in practical experimentation and academic research in industrial organisation.
What is a Liquid Organisation?
Here’s a formal definition:
A firm designed to autonomously and continuously reconfigure its people, processes and platforms
A liquid organisation requires knowledge of and decisions about firm boundaries regarding capabilities, learning, and coherence.
A few core principles can help us relate this firm style to what we already know about how firms make decisions and structure themselves.
Modularity is key, as capabilities are exposed as both internal and external APIs. This is required because without a clean, well-documented API, AI agents can’t discover and communicate with the capability.
Dynamic teams provide the flexibility: role definitions are fluid, talent focuses on problem-solving, and reporting lines and hierarchies are treated like the legacy technology they are.
Optimal allocation of decision rights based on expertise, access to data, and efficiency is required. Again, hierarchical decision-making won’t work because the communication overhead alone will be too slow to keep pace with the changes required to stay competitive.
The idea of a steady state is dead. Perpetual adaptation is required; strategies and structures will evolve quickly, and all processes will align with the new operating cycle. You won’t survive with an annual planning and budgeting regime.
None of this fluidity is possible without clean and transparent data, as well as the best technology infrastructure. A firm that hasn’t done the work won’t be able to leap into a liquid structure. They’ll first need to remove the blockers.
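To make the modularity principle concrete, here’s a minimal sketch of a capability registry an AI agent could query to find what a firm exposes. The names, endpoints, and shapes are illustrative assumptions, not a real product API:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """A business capability exposed for agent discovery (illustrative)."""
    name: str
    description: str
    endpoint: str
    tags: list = field(default_factory=list)

class CapabilityRegistry:
    """In-memory registry an AI agent could query to find capabilities."""
    def __init__(self):
        self._capabilities = {}

    def register(self, cap: Capability):
        self._capabilities[cap.name] = cap

    def discover(self, keyword: str):
        """Return capabilities whose description or tags match the keyword."""
        kw = keyword.lower()
        return [
            c for c in self._capabilities.values()
            if kw in c.description.lower() or any(kw in t for t in c.tags)
        ]

registry = CapabilityRegistry()
registry.register(Capability(
    name="invoice-processing",
    description="Extract and validate invoice line items",
    endpoint="/api/v1/invoices",
    tags=["finance", "documents"],
))
matches = registry.discover("invoice")
```

In practice this lookup would sit behind an API gateway with authentication and machine-readable schemas, but the core idea is the same: if an agent can’t search for and find a capability, the capability may as well not exist.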
Theoretical Foundations
Coase taught us that firms exist because using the market is costly. AI puts downward pressure on those transaction costs, potentially driving them toward the marginal cost of computation, paid in AI tokens.
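As a toy illustration of that pressure, compare the cost of completing a task with human time versus AI tokens. The prices below are assumptions for the sake of the arithmetic, not market quotes:

```python
# Illustrative comparison of completing a task with human time vs AI tokens.
# Both prices are assumptions, not quotes from any provider.

HUMAN_HOURLY_RATE = 80.0        # assumed fully-loaded cost per hour (USD)
TOKEN_PRICE_PER_MILLION = 10.0  # assumed blended cost per million tokens (USD)

def human_cost(hours: float) -> float:
    """Cost of a task completed with human time."""
    return hours * HUMAN_HOURLY_RATE

def token_cost(tokens: int) -> float:
    """Cost of a task completed with AI tokens."""
    return tokens / 1_000_000 * TOKEN_PRICE_PER_MILLION

# A task taking a human 2 hours vs an agent consuming 500k tokens:
h = human_cost(2)        # 160.0
t = token_cost(500_000)  # 5.0
ratio = h / t            # 32.0
```

Under these assumed prices the token route is roughly 30x cheaper; the exact ratio will vary wildly by task and model, but the direction of the pressure is the point.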
The rise of APIs means that if a capability is exposed as an API, it is easily discoverable and consumable for completing tasks either within the firm or through the external market.
The rise of AI agents means that coordinating all of this will become more efficient over time, as the best way of completing a task can be decided in near real-time based on the constraints imposed on the AI agent.
Teece introduced the concept of dynamic capabilities, where a firm is in a loop of sensing, seizing, and transforming opportunities. This means they are always on the lookout for ways to do things better, faster, and more cost-effectively.
This cycle of constant improvement is the heart of a liquid firm. It’s challenging to convey to an executive entrenched in a mindset of structured quarterly planning cycles just how fast-paced this will become over the next few years. You may have just hours or days to deliver a coherent response to a competitor's move.
These competitive moves, regulatory changes, and technology shocks are breakpoints where all the small changes accumulate to a critical threshold. Now it makes sense to automate the last mile of manufacturing, or close the offshore processing centre, or disestablish the entire function that is now performed by AI tooling.
Many corporate responses to this type of thinking have fallen short. Why? Because humans are still involved. It doesn’t matter if your technology function runs on a scrum model if you are stuck in five-year corporate strategy and planning loops.
There are already numerous fast feedback loops in business. A technology business can launch and acquire millions of customers in days; a trading platform can lose millions in minutes. The mindset shift in this AI era is that everything in a business is now tracked in real time and monitored by your friendly AI agent.
Again, much of this already occurs - you have monitoring software, platforms that automate logistics and manufacturing processes, and cybersecurity monitoring. The gap I’d like to remind you of is that, over time, every task your business does will be measured and monitored by AI.
Much of the existing academic theory still assumes humans in the loop, making the boundary decision points. This will only hold for a few more years: at some point, it will be regarded as unprofessional, unethical, or a breach of fiduciary duty for a human to decide how to structure a process or design an operating model.
If you think this is a crazy notion, consider researching shareholder class actions and some of the issues that big corporations have already been sued for and had to pay out millions or hundreds of millions of dollars in damages.
When the monetary consequences of not adopting AI-everywhere shift towards the survival of multi-billion or multi-trillion dollar market capitalisations, how do you think corporate leaders will respond?
AI as Structural Enabler
If AI is viewed as just a technology, it’s remarkable when you break down how it can help businesses do things better, faster, and cheaper. There will be societal consequences of these changes, but it is better to be open and honest about the magnitude of this change.
Coordination automation is a significant capability, as it enables the discovery of information, allocation of tasks, monitoring of performance, and leveraging of APIs, all of which shrink the span of control requirements a manager has. Much of this already exists in areas like industrial automation and real-time logistics. AI expands this into areas traditionally held by highly skilled workers.
The corporate world has a history of restructuring and redesigning target operating models. These transformation programs can take years and cost millions of dollars. The AI era pushes the vision and execution of these restructurings down to the machine level. Bottlenecks and skill mismatches are identified constantly, and action is taken.
One of the darker patterns that might emerge is the creation of internal marketplaces that resemble Amazon warehouse employment or Uber driving. Task allocation by algorithm, AI wearables recording everything, continuous monitoring and surveillance with aggressive KPIs. This won’t be limited to warehouse workers - it will be the life of many university-educated “high-skill” workers.
This is connected to the “API economy,” where every capability is an API endpoint for AI agents to investigate and utilise as they see fit, making the firm's boundary very flexible. Already, freelancers and contractors are treated as “on-demand” resources. Gig work and insecure employment already exist at high rates across the world.
The shift here is that firms themselves must become API endpoints to remain discoverable. For instance, a brand without an internet or social media presence already finds it challenging to attract people to its store.
Splitting a project into tasks will be done at scale, and each business will decide whether it prioritises cost, efficiency, or reliability. This is already emerging in how we manually select different AI models for different tasks. I might choose o3 for a shopping query and Claude Opus 4 for a complex software design question.
The difference is that all these small decisions, previously made with human judgment, will now be made by algorithms. Policy engines will enforce compliance guardrails and privacy protection.
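A minimal sketch of what such algorithmic routing with a policy guardrail might look like. The routing table, model names, and policy rules here are illustrative assumptions, not recommendations:

```python
# Sketch of algorithmic task routing with a compliance guardrail.
# Routing table, model names, and blocked data classes are all assumptions.

ROUTING_TABLE = {
    "shopping": {"model": "o3", "priority": "cost"},
    "software_design": {"model": "claude-opus-4", "priority": "reliability"},
}

# Assumed policy: tasks touching these data classes never leave the firm.
BLOCKED_DATA_CLASSES = {"pii", "payment"}

def route_task(task_type: str, data_classes: set) -> dict:
    """Pick a model for a task, or refuse if policy forbids external routing."""
    if data_classes & BLOCKED_DATA_CLASSES:
        return {"allowed": False, "reason": "policy: sensitive data class"}
    choice = ROUTING_TABLE.get(task_type, {"model": "default", "priority": "cost"})
    return {"allowed": True, **choice}

design = route_task("software_design", set())
blocked = route_task("shopping", {"pii"})
```

The design choice worth noting is that the policy check runs before any routing decision: the guardrail is structural, not something an agent can reason its way around.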
Implementation Steps
The starting point is understanding your capability boundaries today. How mechanised is your operating model? Ideally, your technology already leverages a microservices architecture and you have nice, clean data to work with. This is all fixable, though - you need to spend and make tough decisions.
AI works best with clean data. Meta just paid billions of dollars for a data firm to help it in the race to superintelligence. If you are still in the email, spreadsheet, tacit knowledge era, this is an urgent agenda item to fix.
One use case that could work well, particularly for a firm with a technology function, is bug fixing. It’s entirely plausible now to set up a pipeline: customer complaints → issues in Linear → Claude Code picks up new issues under a given complexity threshold → fixes bugs → senior engineer reviews → merge to production.
Over time, you can increase the complexity of issues that flow to the coding agent, raising the threshold at which a human needs to be in the loop.
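The triage step of that pipeline can be sketched in a few lines. The issue shape and threshold value are assumptions for illustration, not a Linear or Claude Code integration:

```python
# Sketch of pipeline triage: issues below a complexity threshold go to a
# coding agent; everything else stays with a human. Shapes are illustrative.

from dataclasses import dataclass

@dataclass
class Issue:
    id: int
    title: str
    complexity: int  # e.g. 1 (trivial) to 10 (architectural)

AGENT_COMPLEXITY_THRESHOLD = 3  # raise this as trust in the agent grows

def triage(issues):
    """Split issues into an agent queue and a human queue by complexity."""
    agent_queue, human_queue = [], []
    for issue in issues:
        if issue.complexity <= AGENT_COMPLEXITY_THRESHOLD:
            agent_queue.append(issue)
        else:
            human_queue.append(issue)
    return agent_queue, human_queue

issues = [Issue(1, "typo in error message", 1),
          Issue(2, "race condition in checkout", 7),
          Issue(3, "broken link on pricing page", 2)]
agent_queue, human_queue = triage(issues)
```

Raising `AGENT_COMPLEXITY_THRESHOLD` is the one-line change that widens the agent’s remit as confidence grows - the point of the paragraph above.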
This changes the sorts of people you want to hire in the business. Problem solving and critical thinking are more important than ever. If someone needs to be told what to do or given a process or checklist, it will be highly challenging for them in the years to come. The skill bar to justify a wage or salary is ratcheting up.
Over time, an increasing number of tasks will be delegated with the autonomy to make decisions independently. We already live our lives dictated by algorithms, and rising AI capability pushes this down to the task level inside businesses.
The primary way to become more liquid as a business is to capitalise on “step change” opportunities as they arise. For example, providing everyone in the business with access to the latest AI tools safely, with just the critical safety guardrails to protect your data, is a straightforward win.
Risks, Constraints, and Mitigations
Change fatigue is one of the most significant challenges in change management. Multi-year regulatory remediation projects and restructuring every six months drain the energy and vitality from workforces. The bad news is that the next few years will be even worse because of the incentives that AI injects into hyper-competitive marketplaces.
There is already an undercurrent of pushback against “black box” decisions in all areas of life, and a focus on privacy and data protection. This means that the explainability of decisions and the exposure of thinking processes will be something regulators and stakeholders start to expect from businesses.
One prediction I make in this area: many business decisions impacting customers, previously regarded as internal proprietary information, will be expected to be shared with the impacted customers.
This will be extremely painful and challenging, especially in regulated industries. Still, trust levels will be so low that firms will have no choice but to become much more transparent and open about their internal workings.
Many existing regulatory structures, such as data sovereignty, labour regulations, antitrust rules, and privacy laws, will create breakpoints where “more automation” clashes with existing laws.
Compliance embedded in code is a good idea, but it is highly challenging to implement correctly. Here, I think Big AI will start to lobby against many existing worker protections and privacy protections. The scale of the potential cost efficiencies will create lobbying incentives that have never been seen before.
Within large corporations, pockets of resistance will likely exist. It will take many years to reduce the need for “humans in the loop”, but over time, as AI becomes more powerful, many of the arguments made over the past few decades to increase regulation will be repurposed to advocate for “trusting the machines more”.
There is no doubt that human judgment will still be required for many tasks across the scale of human civilisation. Over-automation risks blind spots, but we only need to look at how outsourcing, offshoring, and current automation have progressed unchecked to realise that this is happening, whether or not you think we can mount some societal response.
Leadership Playbook
This article contains a significant amount of theory and speculation. But how do you win? The more I research and experiment with AI, the more I realise that nothing matters more to a business in 2025 than spending on getting the digital plumbing right first.
Most projects fail. Most AI projects will fail. We know this has nothing to do with the technology itself. This is all about the hygiene of the business running the project.
Clean data, transparent processes for managing technology, clear governance and controls, a clear strategy, and a “North Star” to guide the business forward. You have to build the foundation now to succeed tomorrow.
If you didn’t take technology seriously, you are now in catch-up mode. This is challenging, but the cool thing is that instead of jumping straight to the fun AI workflow automation, you can utilise AI tools to help refine and optimise your existing technology stack and business processes.
The old way of transformation was burdened by leaders so deeply worried about risk that poor decisions were taken - lift-and-shift of manual processes to the cloud resulting in bill shock, choosing a vendor because they are big when your business required a much simpler solution, or listening to name-brand “prestigious” consultants.
What is most amusing to me is that many of the winners in the AI era will either be AI-first startups that don’t need to worry about cleaning up the legacy burden before getting to the starting line, or firms that are already excelling with technology.
There is a path to turn this around - I have some diagnostics developed to help AI transformation work. A significant portion of it leverages standard program management and change management principles, as well as my previous work on AI transformation and adaptation.
The challenge is that the pace of feedback loops must be at least 10 times the typical corporate operating speed. A lot of people are talking about AI transformation and their pilot projects while forgetting that AI surfaces all the technology debt you haven’t paid off.
The truth is complicated - every business can devise a winning game plan to address the issues hindering its progress toward a successful AI transformation. It just requires a sense of urgency and a willingness to spend what it takes and make tough decisions to solve problems as they arise.
What’s next?
I've developed an AI Transformation Readiness Diagnostic that cuts through the consulting theatre and measures what matters for liquid organisations. The diagnostic examines how modular your capabilities are - not the architecture diagrams you show investors, but whether your AI agents can discover and use them.
It tests whether your data is genuinely usable or still trapped in spreadsheet purgatory. Most importantly, it identifies exactly where human friction is creating bottlenecks that will hold you back while competitors move at machine speed.
I don't have any generic insights or sales pitch - just specific analysis of your situation. I aim to augment my AI research and experimentation with fieldwork, such as this, among business owners and leaders who are actively thinking about these challenges.
Here's how it works: Reply to this email, and I'll send you a link to a Gemini Canvas app diagnostic I've created. It takes about 10 minutes to complete. You can download the output as a PDF for your records and to email back to me.
Once you return the downloaded PDF, I'll analyse your responses, and if you’re interested, we can schedule a 30-minute call to discuss what's blocking your AI transformation.
The diagnostic reveals gaps between where you are and where you need to be. Not another maturity model that tells you what you already know, but a clear view of which specific barriers will matter when AI-first competitors emerge.
If you're wrestling with AI transformation at the strategic level, please reply, and I'll send you the diagnostic link.
The firms that think they're "doing the right thing" with their governance committees and pilot projects are the same ones that will announce layoffs when an AI-first competitor completes its AI transformation.
It would be better to know where you stand and make the choice to win.
Until next time,
Brennan