The Deterministic Trap
Why We Fear the Very Intelligence We're Building - Blog #1 of The AI Disruptor Playbook Series
There’s something deeply human about the friction we feel with AI right now.
We built our organizations — and ourselves — on a beautiful promise: follow the right steps, in the right order, and you control the outcome. That instinct didn’t just build companies. It built cathedrals. It landed humans on the moon. It is, in many ways, the best of us.
And then AI arrived. And it doesn’t operate that way.
AI offers likelihoods, not laws. Possibilities, not proofs. It thinks less like a system and more like a wise, experienced human — weighing context, holding contradiction, sitting comfortably with ambiguity. For organizations built on precision and predictability, that feels profoundly unfamiliar.
So we do what humans naturally do: we reach for the frameworks we trust. Stage gates. Governance structures. Pre-defined ROI thresholds.
A recent MIT study found that 95% of enterprise gen-AI pilots fail to deliver measurable impact — not because the models underperform, but because of gaps in integration, data, and governance. The same instinct that makes us great managers — the drive to reduce variance, to predict, to control — is the very instinct that makes us poor stewards of adaptive intelligence.
But a different posture exists — and it’s moving faster.
Anyone who has spent time around Google’s engineering culture knows the joke: the chaos 10x engine. At any given moment, several independent teams are working on the same problem — no rigid process, no pre-defined ROI, no single owner waiting for permission. It sounds inefficient. It’s actually evolutionary. Rather than demanding a destination before departure, they set guardrails — not to eliminate chaos, but to make it safe enough to learn from. Experiments run loosely. Patterns emerge. Only then is discipline applied — not at the beginning to control outcomes, but at the end to harvest them.
That’s a fundamentally different relationship with uncertainty. One that treats ambiguity not as a risk to be eliminated, but as a signal to be decoded.
That shift — from controlling outcomes to cultivating conditions — is something I’m witnessing more and more in the leaders I work alongside every day. And it’s energizing.
The real question isn’t “how do we scale AI?”
It’s: Who do we need to become to think alongside it?
Because the transformation was never really about technology. It was always about us.

Wei, this one stayed with me, probably because I have spent 20 years building exactly the mental models you are describing as the friction point. My adjustment is to recognize that those mental models are not obsolete, just misapplied.
Even the control instinct, which you rightly call out, can be reframed rather than abandoned. I look at it as changing your mindset from designing controls at every step of a process to applying them at the source: set the principles, define the boundaries, establish what good looks like, then let AI help the processes and workflows breathe.
The experienced leader who learns to use their frameworks and mental models as a lens for AI rather than a filter against it will not only adapt but outperform people who never built those models in the first place. Would love to see you explore this in Blog 2; I think there is a real case to be made for the experience advantage.