From Denial to Acceptance: Mapping the AI Adoption Journey
The grief cycle of AI adoption is real. Discover the three orders of consequences reshaping our world
Hi there,
The AI vibe is shifting, and today I’m going to share why that matters for AI adoption and how we choose to manage the societal consequences of this change.
The grief cycle is real.
Denial → Anger → Bargaining → Depression → Acceptance.
Every week, more people move from one stage to the next.
The grief cycle bleeds into the technology adoption curve.
In the end, what’s the difference between the two?
Eventually, even the biggest AI sceptics will entrust complex tasks to AI agents.
They will vibe with the AI model of their choice.
They’ll say thank you.
What Is The AI Vibe?
The AI vibe is an attempt at describing an emerging cultural trend.
Since leaving my corporate job in February 2025, I’ve spent hundreds of hours and thousands of dollars on everything AI-related.
Prompting, building, debugging, researching, analysing, writing, thinking.
Going hands-on with the AI tools is the best and most efficient way to learn.
At the same time I’ve been learning, the hype and rhetoric surrounding AI have skyrocketed.
Reflecting on the last 6 months, I’m more convinced than ever before that the potential of AI to improve outcomes across a range of domains is enormous.
Almost everything you can think of will get better, faster, and more affordable.
There will be societal consequences from this change, and that’s where a lot of the risk is sitting.
The AI vibe is that sense of social or cultural acceptability.
I can use AI. It’s OK to use technology. Use it safely and acknowledge the limitations.
The denial camp doesn’t even want to engage in measured discussion.
Blocked for AI reply. Blocked for AI image. Blocked for use of the em dash.
In a few years, this acting out from a place of fear will look laughable.
Sometimes, there are flickers of reasonable pushback. Sometimes.
I read some of the comments on my short videos.
AI is garbage. AI doesn’t work. AI always hallucinates.
It’s just repeating training data. It’s just predicting tokens. It’s not reasoning.
Fine.
Please show me your math olympiad problem-solving results.
If you don’t want to upskill and adapt, there’s no need to throw stones.
Why Is It Shifting?
Humans are complex creatures.
Managing change is more about how people respond to fear and their emotions than it is about objective reality.
AI itself is far more mechanical and formulaic than many people are used to.
It is shifting as more people get what they are looking for from AI tools.
They start to move through the grief cycle as they realise the societal consequences.
The way AI outputs change over time doesn’t “feel right” to many people.
That’s why even discussing AI can trigger genuine, deeply emotional responses.
Many things in life are inside our control.
Many things in life are beyond our control.
Focusing on what we can control and how we respond to change is the path forward.
There are no realistic policy solutions or credible political coalitions that can slow down this AI super-trend.
Your power to hold back the inevitable was always an illusion.
Your options are to carefully choose what models you use, how you share your data, and what you decide to keep for yourself and what gets pushed to the machine forever.
The change management required isn’t just for us in our use of AI.
It’s a broader societal change management program that is presently being ignored or dismissed as science fiction.
What Happens Next?
Every few months, AI models continue to improve.
They make fewer mistakes, they hallucinate less, they solve more complex problems.
The open-weight, open-source models aren’t far behind the frontier models.
This means that there is no reassembling the broken glass vase of AI capability.
Even if the frontier labs stopped work tomorrow, so many teams are working on these problems that it’s like an out-of-control rollercoaster.
We don’t know exactly where we’re going, what the safety levels of the ride are, and what might happen, but we are all locked in.
The more I’ve read, researched, and experimented, the more I realise how few people think beyond the immediate consequences of an action.
In my corporate life, consistently asking “and then what happens?” was a valuable skill that helped me drill down into complex problems.
There are at least three orders of consequences we should think about.
There are first-order consequences.
These are the direct and immediate results of an action.
There are second-order consequences.
These are the indirect effects triggered by the first-order consequences.
There are third-order consequences.
These are the longer-term ripple effects from system changes and feedback loops.
Take AI image generation, for example.
A first-order consequence is that I save time and money. I get a faster, cheaper, and likely better outcome.
The photographer loses out. In the short term, everything is fine - it’s just like any other technological change.
A second-order consequence is that distinguishing between what is real and what is fake becomes more challenging.
As most of the previous spending on photography shifts to AI tools, the market for human photographers shrinks. Fewer people are able to self-actualise by monetising their passion or skills in this field.
A third-order consequence could be that, over time, what “real art” means becomes harder and harder to pin down. Those who are happy to generate AI images enjoy an enormous consumer surplus.
At the same time, those who despair over the loss of human input to almost all photography can do nothing to reverse the trend. The divide between fake and real becomes so blurred that in-person communication becomes the only “safe” channel.
What makes AI different from previous technology waves is that the drive to minimise model loss is so mechanical and relentless. Hardly any human will have the energy or incentive to push back and carve out areas where humanity stays involved across the breadth of modern life.
So What?
We can all make choices.
You can choose to ignore AI or maintain an open mind and develop your AI skills.
I’m going to keep focusing on this trend and helping people with AI strategy and transformation.
My take is that you lose nothing by being aware of and experimenting with the latest AI tools; in fact, you’re building up a better ability to sense-check claims.
There are enormous opportunities for business owners to start small by automating basic tasks, then build a much leaner, more streamlined operating model over time.
If that’s something you’d be interested in talking to me about, you can book a free 30-minute call with me.
Next week, I’ll resume my long-form series on how firms will restructure themselves in the AI era.
Let me know below if you’d like more articles like this one; it will help me plan my content for the next few months.
Regards,
Brennan


