Johnny Butler

January 31, 2026

AI, TDD, and the Return of the Feedback Loop

Lately, working with AI has reminded me a lot of how Test-Driven Development felt when I first learned it.
Not the dogma.
Not the purity debates.
But the feedback loop.

What TDD was really doing (for me)

When I interview engineers and ask about TDD, I often hear the same answer:
“It helps catch bugs.”

That’s true, but it was never the main benefit for me.
The real value of TDD was the speed of validation.
Writing a test forced me to answer a simple question early:
Do I actually understand what I’m trying to build?

If it was hard to describe the behaviour in a test, it usually meant:
  • the responsibility wasn’t clear
  • the abstraction was wrong
  • or the thing itself was too complex

Tests weren’t just about correctness; they were about clarity.
They applied design pressure early, when change was still cheap.
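As a sketch of what that design pressure feels like in practice: if the behaviour is easy to state as a test, the responsibility is probably clear. A minimal test-first example (the `apply_discount` function and its rules are hypothetical, just for illustration):

```python
# Test written first: it forces me to state the behaviour up front.
def test_apply_discount():
    assert apply_discount(100.0, 20.0) == 80.0   # plain percentage off
    assert apply_discount(100.0, 150.0) == 0.0   # over-discount clamps to free
    assert apply_discount(50.0, -10.0) == 50.0   # negative discount clamps to a no-op

# Implementation written second, just enough to satisfy the test.
def apply_discount(price: float, percent: float) -> float:
    percent = max(0.0, min(100.0, percent))
    return round(price * (1 - percent / 100), 2)

test_apply_discount()
```

If the asserts had been hard to write, that would have been the early signal: the clamping rules, the rounding, the responsibility of the function weren’t actually decided yet.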

I was never dogmatic about TDD

I’ve never practiced TDD religiously.
Sometimes it made sense.
Sometimes it didn’t.
Context always mattered.

But one thing was non-negotiable:
Anything that went to production needed solid test coverage.
Not because tests are virtuous, but because they validate behaviour, protect intent, and make change safer.

That mindset hasn’t changed.

Why this didn’t work with AI before

This is the part that’s changed recently.
For a long time, AI just wasn’t fast or accurate enough to fit into this kind of workflow. The feedback loop was too slow, or too noisy, to be genuinely useful.
You’d spend as much time correcting output as you would writing the code yourself.

That broke the loop.

AI has brought that feeling back

Recent improvements have changed that.
AI is now fast and accurate enough to sit inside the iteration loop, rather than around it. Code appears quickly, often faster than I can fully reason about it upfront, which shifts the bottleneck away from typing and toward validation.

The questions become:
  • Is this actually what I meant?
  • Does this behave the way I expect?
  • Is this simpler than what I had before?

Iteration is cheap.
Feedback is fast.
And just like with TDD, when something is hard to validate, it’s often a sign the problem isn’t well-shaped yet.

I’m still driving

One thing I’m very conscious of:
I’m still in control.

AI might write a lot of the code, but I’m still:
  • defining behaviour
  • deciding what matters
  • validating outcomes
  • insisting on tests before production
  • owning the result

If anything, my role has moved up a level.
Less execution.
More judgement.

Old ideas, new leverage again

Good TDD was never about tests for their own sake.
It was about tight feedback loops, early validation, and design clarity.
AI hasn’t replaced that.
It’s made the loop fast enough and reliable enough to matter again.
When feedback is instant, unclear thinking shows up immediately.
When validation is cheap, complexity has fewer places to hide.

That’s a good thing.

Same responsibility, better feedback

AI doesn’t remove the need to think.
It doesn’t remove accountability.
It doesn’t decide what “correct” means.
What it does is make iteration cheap enough that clarity becomes the constraint again — just like when TDD was working well.

For me, that’s been unexpectedly refreshing.