The AI control trap (and how to escape it)
Your teams are already using AI. Here's how to catch up →
This week, I’ve been researching the control paradox.
The Control Paradox in 60 Seconds
Your bureaucracy operates at human speed, AI at machine speed
Your teams are already using AI (with or without your permission)
More control = less innovation. Tradeoffs are real.
Four governance models work: destination-setting, boundaries, machine governance, and shape-shifting
Start small, fail fast, scale what works, and disrupt yourself before your competitors do
Traditional top-down management controls, originally intended to ensure alignment between strategy and execution, can undermine performance and innovation.
The firms congratulating themselves on developing detailed governance methodologies and frameworks are the ones most likely to fall behind.
That's because while they were emailing Word drafts of AI risk frameworks back and forth with track changes enabled, their competitors were building, shipping, and learning from their failures.
You need some guardrails - but just enough, and no more, because you can’t afford the tradeoffs in the long run.
The question for every business owner and leader in the AI era is: How much control is too much?
Machine Speed vs Management Speed
Traditional hierarchical management fails at AI speed because its communication and approval overhead is too slow to stay competitive. AI operates at machine speed; bureaucracy operates at human speed. The real cost of excessive control is innovation grinding to a halt.
Power is shifting to technical folks with expertise and an understanding of the top business problems to solve in their industry. I’ve written previously about how I believe technical executives will become dominant in the C-suite, as many other business unit and functional leaders will fall behind on the technology front and add less overall value to the firm.
The way to fight this trend is to experiment with the paid AI tools yourself on your own time, so you develop the right instincts for what works and what doesn’t in the AI world.
This shift will trigger a manager identity crisis of sorts: managers will experience a loss of control when a Gen Z engineer can rebuild 80% of the capability of their entire function in a week or two of vibe coding.
They will need to let go of outdated notions of office attendance and micro-management, because the reality is that as more people realise the leverage AI gives them, fewer and fewer skilled workers will be willing to work for others at all unless they must.
The managerial crisis driven by AI will lead to a shift from command and control to coaching, from decision interrogation to decision enablement, and from top-down control to harnessing the benefits and managing the risks of bottom-up innovation.
Your Teams Are Already Using AI
The AI, API, and AI agent economy changes how quickly you can fail and how fast you can learn. Unless you've already given your team safe, sanctioned ways to use AI, it's already embedded in your processes without your knowledge.
They're chatting with their personal ChatGPT accounts, screenshotting whiteboards to generate graphics, and leaking everything to one AI model or another, even if you've banned AI in your workplace.
No one, especially younger generations, will tolerate being forced to go through slow processes and perform tasks manually when better, faster, and cheaper options are just a browser tab away.
This means the research, report, approve, and execute cycle doesn't work. In-flight experimentation, with all its associated risks, is likely already happening in your business. The challenge for you is: how do you move AI experiments into a more controlled yet permissive framework?
This is a global trend occurring across cultures, with an increasingly universal pressure towards distributed authority in the workplace. Leaders need to consider how to capitalise on this opportunity and succeed, with “just enough” safety guardrails in place to protect data, mitigate issues and manage risk effectively.
What The Research Shows
A BCG study found that micromanagement stifles creativity and innovation, preventing agile teams from feeling empowered. McKinsey data showed that well-designed guardrails around AI improved the odds of an AI project resulting in cost reductions. A Deloitte report found that organisations with structured yet light governance were more likely to succeed in their AI transformation.
We don't need brand-name consulting research to think clearly about the control paradox. You only need to recall a time in your professional life when a firm's processes led to a silly outcome. Many of us have these war stories; with my experience in the technology and change space, I have plenty of my own.
More management control is not always better. In AI-first organisations, it can be deadly. There’s a sweet spot, where you have guardrails to avoid the worst issues and know where to go instead of trying to plan, predict, and control every step of the journey.
There is a tradeoff between control and performance. I'm growing tired of commentators who won't acknowledge that these tradeoffs exist, or the cultural damage to innovation that comes from focusing too heavily on control and compliance.
I'm not arguing that we don't need some governance. With no oversight, data breaches, privacy violations, cybersecurity incidents, and biased AI outcomes are almost guaranteed. Freedom with accountability, or just-enough-failure, might be better ways of understanding this midpoint.
In the short videos I’ve been publishing on TikTok and YouTube, one theme I keep returning to is the idea that many projects fail, and many AI projects will fail.
There's a whole body of research and lived experience telling us that doubling down on more controls when something goes wrong often produces the opposite of the intended outcome: you kill culture, instil fear, and increase the likelihood of even worse results, because no one feels safe to experiment or admit mistakes.
The Governance Trap
Early choices in an AI project have a lasting impact: every business's culture and organisational memory act as carriers of path dependency.
The lessons drawn from past mistakes, and senior management's beliefs about why something went wrong, often carry more weight than the reality of gritty trial and error at the coalface.
There's also a timing trap. Put too much control around an AI project at the outset and it dies like an over-planned startup: it never gets off the ground, and six months later no one wants to discuss it, because the AI Project Exceptional Governance Committee meeting was never held.
If you wait too long and try to wrap controls around a winning bet AI project, you could find yourself trying to harness chaos and reversing some of the very decisions that enabled the success.
This is why it’s better to think carefully about your AI strategy and undertake small projects to gain a sense of what works and what doesn’t in your business before moving on to a comprehensive AI transformation program.
Different industries also have different cultures: advertising versus banking, construction versus healthcare, the public sector versus the private sector. Each environment requires a tailored approach that gives proper thought to the unique issues the industry will face as it progresses through AI adoption.
Four Different Governance Models
Set The Destination, Not The Route
This is where leaders set vision and KPIs, but the teams own execution. There are checks and balances in place to ensure the AI strategy makes sense and that what is being delivered supports it.
Boundaries Without Bureaucracy
This is where technical standards, ethical guidelines, budget limits, and safety protocols are clearly defined upfront. Teams innovate freely within those boundaries: budget limits, for example, stop you from burning millions on API credits without resorting to micro-management or task-level operational control.
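To make that concrete, here's a minimal sketch of what a budget guardrail might look like in code. The class, thresholds, and costs are hypothetical illustrations for this newsletter, not any particular vendor's API.

```python
from dataclasses import dataclass

@dataclass
class SpendGuardrail:
    """Hypothetical monthly budget guardrail for AI API usage."""
    monthly_cap_usd: float           # hard boundary agreed upfront
    alert_threshold: float = 0.8     # warn once 80% of the cap is used
    spent_usd: float = 0.0

    def authorise(self, estimated_cost_usd: float) -> bool:
        """Approve or reject a call based purely on remaining budget,
        with no task-level review of what the call is for."""
        projected = self.spent_usd + estimated_cost_usd
        if projected > self.monthly_cap_usd:
            print(f"BLOCKED: projected spend ${projected:,.2f} exceeds the cap")
            return False
        if projected > self.monthly_cap_usd * self.alert_threshold:
            print(f"WARNING: {projected / self.monthly_cap_usd:.0%} of the monthly cap used")
        self.spent_usd = projected
        return True

# A team experiments freely until the boundary is hit.
guardrail = SpendGuardrail(monthly_cap_usd=5_000)
for cost in (1_200.0, 2_500.0, 900.0, 800.0):
    guardrail.authorise(cost)
```

The point of the design is that the control sits at the boundary, not inside the work: nothing in the guardrail asks what the API call is for.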
When Machines Monitor Machines
The intelligent firm concept involves AI serving as a governance mechanism, with automated compliance checking, real-time monitoring, and optimisation based on machine logic.
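As a toy illustration of machines monitoring machines, here's a sketch of an automated compliance gate that screens every model output before release. The policy patterns and function names are assumptions made for this example, not a real rule set; in practice the rules would come from your data-protection and legal teams.

```python
import re

# Illustrative policy patterns only.
POLICY_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def compliance_check(model_output: str) -> list[str]:
    """Return the policy rules this output violates (empty list = pass)."""
    return [name for name, pattern in POLICY_PATTERNS.items()
            if pattern.search(model_output)]

def release(model_output: str) -> str:
    """Machine-speed gate: block non-compliant outputs automatically
    instead of routing every response through a human reviewer."""
    violations = compliance_check(model_output)
    if violations:
        return f"[withheld: violates {', '.join(violations)}]"
    return model_output

print(release("Your quarterly summary is attached."))
print(release("Contact jane.doe@example.com for the raw data."))
```

The gate runs at machine speed on every output, which is exactly the kind of real-time, automated compliance checking a human approval queue can't match.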
Shape-Shifting For Success
This is the liquid organisation, where the firm continually reshapes itself around the set of tasks to be completed. The right to make decisions sits with whichever human or machine is best placed to make the call.
Decision-making moves away from the management structure to the algorithm. There are internal marketplaces for everything, and minimising loss inside the guardrails is a core focus.
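Here's a minimal sketch of what such a decision-routing rule might look like, assuming hypothetical confidence and impact thresholds that a real firm would tune per decision type.

```python
from typing import Callable

# Illustrative thresholds; in practice these would be set per decision type
# and revised as the firm learns what the machine gets right.
CONFIDENCE_FLOOR = 0.90       # below this, a human decides
IMPACT_CEILING_USD = 10_000   # above this, a human always decides

def route_decision(model_confidence: float, impact_usd: float,
                   machine: Callable[[], str], human: Callable[[], str]) -> str:
    """Send the decision to the machine only when it is confident
    and the stakes sit inside the guardrails."""
    if model_confidence >= CONFIDENCE_FLOOR and impact_usd <= IMPACT_CEILING_USD:
        return machine()
    return human()

# A routine refund is automated; a high-stakes one is escalated.
print(route_decision(0.97, 250, lambda: "auto-approved", lambda: "escalated to a human"))
print(route_decision(0.97, 50_000, lambda: "auto-approved", lambda: "escalated to a human"))
```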
From Theory To Practice: Getting Started
Take Your Control Temperature
How much control do you need? You'll need to determine your comfort level with the risks that AI projects can pose. What are the non-negotiable guardrails? What is the worst front-page headline your firm could live with?
Pick Your First Experiments
Identify a team or process that is low-risk enough to experiment with. Let them develop, test, and deploy as many AI tools as they deem necessary within the guardrails. Build the supporting infrastructure so your technology function gains experience with what works and what doesn't in your environment.
One Size Fits None - Sector Strategies
Heavily regulated industries will require more governance and guardrails; that’s a given. Tech companies will have guardrails, but will also have more autonomy at the team level to identify and experiment with AI tools.
Traditional firms are likely to need digital teams to help navigate the unfamiliar. AI-first startups that build flexible governance and guardrails from the outset can dominate their industries due to the speed at which they can deliver better, faster, and more cost-effective outcomes for their customers.
The Manager Identity Crisis
Many managers will feel anxious about losing control and no longer being able to dictate exactly how AI experiments are conducted. Some won't be able to adjust; others will evolve their experience into enabling roles.
Trust between management and workers will need to be built up over time through open and honest conversations about what AI means for total employment numbers. There's no point sugarcoating the risk of AI job losses when every second AI article discusses the potential for significant cuts.
Warning Signs And Escape Routes
Over-correction, where you swing from total control to total freedom, is too extreme for any workplace. Gradual, measured changes make more sense. When something goes wrong, avoiding outdated managerial techniques like blame and gaslighting is more critical than ever.
Younger generations will no longer tolerate managerial behaviour that was once standard in corporate environments, so please take the necessary steps to calibrate your management style for 2025.
Fake decentralisation is a particular risk for those who don't want to relinquish control. You can't claim your teams are free to experiment while imposing hidden controls or refusing to respect the boundaries you've set for trial and error.
One reason some firms will struggle in the AI era is that they already have a poor culture - AI cannot fix this. Instead, AI projects will expose the rot and highlight the work that needs to be done to build a safe environment for making mistakes and learning from them.
A further risk, when the firm's primary focus is governance and controls, is that compliance metrics overshadow business metrics. Compliance matters, but not nearly as much as using these AI experiments to deliver concrete business results in the form of faster, better, and cheaper outcomes.
When considering your AI strategy, managing the risks of problems such as hallucinations, bias, and decisions that reflect raw machine learning optimisation thinking rather than human concerns is one of the toughest areas to navigate.
My take is that it’s better to be open and honest with your stakeholder community than to be secretive like a traditional corporation. Share that some job losses or job reallocations are inevitable, and plan accordingly. Be generous with your redundancy packages, but ease up on the corporate spin about AI “augmenting” or “creating new jobs”.
There's also an ongoing need to monitor the regulations being implemented as the broader societal consequences of increased AI adoption unfold. This will vary across industries, but remember that, given the rapid pace of change, you're not dealing with a typical technology project. You're entering a process of continuous evolution and ongoing investment in staying ahead of the curve.
Finding Your Balance Point
Freedom and control often go hand in hand. Effective governance doesn’t mean micro-managing every detail. It means designing reasonable guardrails and letting teams figure things out on their own. Mistakes will be made, projects will be written off, and scandals will occur.
The cotton wool commerce mentality that has come to dominate much of the corporate world is an anti-pattern that dooms many AI transformations before they even get off the ground.
The competitive reality is that AI demands a thinking upgrade in how you approach compressing your operating model. Every person, process, and platform needs to be assessed through the lens of: what is the optimal way to deliver equal or higher value by leveraging AI more for this bundle of tasks?
AI-first startups and competitors already on their AI transformation journey are building a muscle internally that enables them to make better AI-related decisions over time as they learn what works and what doesn’t work for their business.
You'll need to loosen your grip. Understand your current capabilities. Start with small bets and small experiments. Let teams take risks inside the boundaries you define. Promote successful tests to production. Get feedback from customers or suppliers. You have to start building today.
The companies that thrive and survive in the AI era are those that figure out how to handle the control paradox in the best way for their unique needs. This is where everything is headed, and you can choose to win if you get the proper support and invest what is required.
Pinpoint your AI transformation blockers
Schedule a 30-minute AI Transformation Gap-Analysis Briefing with me and receive a one-page Diagnostic Report, including your readiness score and quick wins. Book the call now.
Would you prefer to discuss something else or provide feedback? Just reply to this email.
Regards,
Brennan