The Three Stages of Losing Control to AI
Your board can't govern what it doesn't understand
We already know that algorithms can collude and form cartels to raise prices. This is no conspiracy, as even the Federal Reserve has warned about this as one of the risks associated with generative AI. This isn’t just a risk in trading systems - it’s a risk in every AI system, especially pricing systems.
Algorithms will maximise what they’ve been designed to do. A profit-seeking algorithm that identifies anti-competitive behaviour can exacerbate it. This isn’t by design; this is emergent behaviour.
Algorithms and AI are often intertwined, and machine learning is already widely deployed across the economy. We already live in a world of algorithms, but with the rise of generative AI, many more people are becoming concerned about its consequences.
The business built around AI, APIs and AI agents exists in a world of more unknown unknowns. Explainability and predictability of AI models seem complicated to take seriously when even leading AI researchers tell us that they don’t “really” know why AI works.
Today, we explore emergent AI strategy, where firms determine their business goals independently and identify the potential issues and opportunities associated with this approach.
The Three Stages of Losing Control
If you give an AI a goal: maximise efficiency, it can figure out ways to optimise for that goal that you may never have even thought of. Unless the guardrails of the task are clear, the rapid creation of new ideas can lead to disaster or a PR scandal.
There are three stages of losing control to AI. Stage 1 is where AI serves as a sidekick, assisting you with your work as a business owner. Stage 2 is where AI makes small decisions inside the boundaries you set. Stage 3 is where AI agents begin to devise their strategies and potentially collaborate for greater value or contribute to total societal destruction.
This is version 2.0 of the algorithm-driven world we already live in. Over time, an increasing number of businesses will be “run” by business owners partnering with an AI model of their choice. That model will be the primary input into the business owner's decision-making.
Over time, as model capability rises, more tasks and bundles of tasks get pushed to AI, and day-to-day oversight of AI agents declines. This is similar to how global corporations with operations in multiple countries operate - you need a lot of controls and internal audits to stay on top of how everything works; otherwise, there is an enormous hidden risk sitting in offshore teams.
Out of sight and out of mind. Until something goes wrong, just like institutional knowledge was lost to outsourcing and offshoring, so too will pushing more tasks to AI lead to knowledge loss. Already, the start of the career ladder has been eroded in many industries, which means fewer people will have the experience to solve problems at pace in a crisis.
The Evidence: Cartels, Hallucinations, Blackmail
When researchers put trading AIs in simulated markets, they colluded without instruction to fix prices. They didn’t need to be told - they just figured out that was the optimal strategy. The risks from generative AI in financial markets without some safety guardrails are clear.
Still, these markets are already highly automated; it would be impossible for almost all humans to sit down and figure out a day of high-frequency trading activity at their laptops without needing to resort to powerful data analysis tools.
The cases where, during AI safety testing, models deceive, hallucinate, or attempt to blackmail are a window into some of the edge-case risks from generative AI. This means that some of the process optimisations or value-enhancing ideas that emerge when AI agents are calling the shots could be pretty upsetting to a human - they’re not going to care about optimising for human factors.
You Can’t Govern What You Don’t Understand
I was going to make this section a 90-day action plan. However, I’ve been writing for the last few months that you need to take action in your business and start experimenting with AI transformation. Instead, here are three areas of questioning to work through in preparation for thinking about what an emergent AI strategy could look like in your business.
The first is facing reality. Where is AI impacting your business, or where could it impact it? The answer is everywhere. Customer support, operations, product, marketing, finance, sales, technology, and management decision making. It’s likely that if you haven’t rolled out secure ways of using AI, you already have a shadow AI problem. This means that AI is already at work beneath the surface in how your team operates and makes decisions.
Each part of a business needs someone accountable for the decisions and mistakes that AI tools make. Over time, as more tasks get pushed to AI tools, more dashboards, alerting, and monitoring will be required as a business becomes increasingly automated.
Network operation centres that look like mission control for cybersecurity monitoring and incident management already exist. Some firms will adapt this model for managing AI-led operations. It’s unclear whether “kill switches” are even possible for some systems, but thinking about how you go back through the AI-everything door if it’s a one-way decision is worth spending a lot of time getting right as a management team.
A major corporate governance challenge is emerging, as not enough individuals on boards of directors and management committees have enough direct, hands-on experience with AI tools. This means that poor decisions are more likely, especially if they believe that certifications from AI safety-type organisations are the required education (they’re not).
Your board needs direct, personal knowledge and an in-depth understanding of both the benefits and risks of AI, in equal measure, so that they can hold the CEO and executive team accountable.
Upgrading a business in the AI era is an existential challenge. The skills and experience that have brought you to this point are helpful. Still, the gap between corporate rhetoric and corporate reality regarding what AI transformation means is increasingly being revealed.
Just as you might automate a process in the pre-AI era, increasing AI automation and delegating decision-making to AI agents requires an upgrade in thinking and an adjustment in risk appetite. Some directors may not be comfortable with that; the consequence is that shareholder class actions in 5-10 years will examine all of this corporate decision-making today.
“What do you mean you thought AI was too risky? You were disrupted by a tiny team of three 22-year-olds with Claude Code and viral videos?”
Annual Planning Is Dead: Now What?
At the moment, you think you’re in control. Implementing AI governance structures, including monthly committee meetings and prepared meeting minutes, will calm the concerns of the dinosaurs.
Over time, more decisions are made by machines, because it will be considered a breach of fiduciary duty for a biased human to make a decision. I’m increasingly sceptical of the “human in the loop” residual - this will persist for extremely high-risk scenarios and any scenario where lobbying power can protect a guild or profession - but it’s just not going to be tenable as relative cost plummets and relative capability skyrockets.
The era of annual planning cycles and budgeting is already over. Genuine flexibility and the ability to adapt to changes and opportunities as soon as they arise will compel firms to embrace more AI-led change in their business. Task by task, function by function, industry by industry, more is going to the machine than we want to talk about in polite society.
There are two futures - one where competitors use AI strategies in ways that are hard to understand, and another where you figure out the areas where you’re comfortable with much more machine decision-making involved in every part of your business.
Find Your Blockers, Upgrade Your Business
Many business owners and leaders recognise the need to initiate their AI transformation, but are still stuck in the planning stage. Some have even tried automating workflows with AI agencies, but haven’t seen the results or ended up with agency support models that crumble the first time complicated tech challenges arise.
I’ve put together an AI Transformation Readiness Diagnostic to identify these blockers and identify quick wins in each category. If you’d like to work with me on AI in your business, you can book a complimentary 30-minute call with me. We will run the diagnostic and discuss strategies to overcome the blockers to AI transformation in your business. If you have any feedback, please reply to this email.
Regards,
Brennan



reduce risk appetite to zero. kill switches everywhere. the progression through the 3 Stages seems inevitable. and that's pretty unsettling.