Johnny Butler

March 12, 2026

“Whatever mess AI gets us into, AI will get us out of.”

I hear that assumption a lot in business conversations.
It hasn’t been my experience so far.

The question I keep asking is:

How do we get businesses to take AI-driven software risk seriously before they have to feel the consequences themselves?

Unless you’ve lived through the fallout when a fragile, revenue-critical system breaks, it’s easy to underestimate how serious this is. People who haven’t felt that pain firsthand are far less likely to put proper guardrails in place early enough.

What AI changes is the speed.

Before AI, poor engineering decisions tended to cause damage gradually. There were warning signs. You had a chance to notice things drifting and react before the problem spread too far.

Now the pace is different.

AI can massively increase the rate of change. That is useful, but only if the surrounding system is built to absorb it. If the foundations are weak, you are not just moving faster; you are increasing the rate at which instability enters the codebase.

In startups, this is even more critical. One bad architectural or product decision too early, made faster and repeated more often, can create damage that is disproportionately hard to recover from. In some cases, it is the kind of thing that genuinely puts the business at risk.

And I keep hearing that same line: whatever mess AI creates, AI will be able to fix.

That hasn’t been my experience so far.

Maybe that changes over time. But right now, AI works far better in systems with clear boundaries, clean code, established patterns, and strong verification.

That helps humans move faster.
It helps agents move faster too.

Clean code and guardrails are not less important in the AI era.
They are more important.

So I keep coming back to the same question:

How do we get the business to take this seriously before it has to learn the lesson through production pain?