Johnny Butler

March 11, 2026

Don’t just tell the model what good code looks like. Show it, then make it prove it followed it

LLMs already know plenty of patterns from the internet, but that also means they have picked up plenty of bad implementations too.

A lot of AI coding guidance is still too vague:
“use best practices”
“follow SOLID”
“keep it maintainable”

That helps, but only up to a point.

The real jump in quality comes when you give the model concrete examples from your own codebase, plus clear rules for when to apply them, how to apply them, and when not to.
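A playbook entry can be as lightweight as a few lines the agent reads before writing code. This is a hypothetical shape, not a prescribed format; the file path and pattern name are made up for illustration:

```markdown
## Pattern: query object
Good example: payments/queries/active_users.py
Apply when: a controller or service needs a multi-clause query.
Do NOT apply when: it is a single where-clause; inline it instead.
Verify: name the file(s) where the pattern was applied, and why.
```

The point is that each rule pairs a concrete example from your codebase with explicit apply / don't-apply conditions, so the model is matching against your code, not the internet's.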

I now have the agent run code construction through my playbooks.

That means it doesn’t just generate code and stop there. It also has to verify what patterns it applied, where it applied them, and why that was the right choice for that situation.
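In practice, that verification can be as simple as requiring a short report alongside the diff. A hypothetical example of what the agent hands back (paths and pattern names are illustrative):

```text
Patterns applied:
- adapter (payments/stripe_adapter.py): isolates the Stripe SDK
  behind the payment-gateway boundary
- query object (payments/queries/active_users.py): multi-clause
  query pulled out of the service

Patterns considered and rejected:
- service object: change was small enough to stay where it was
```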

For example:
show it what good separation of concerns looks like
show it what a bad conditional tree looks like
show it your preferred shape for service objects, adapters, query objects, etc.
show it the boundaries you want respected
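A minimal Python sketch of the kind of before/after pair I mean, using a made-up payments example: a conditional tree that grows a branch per provider, versus a registry of small adapters that keeps provider details behind a boundary.

```python
# BAD: a conditional tree. Every new provider adds a branch here,
# and routing logic is tangled up with provider details.
def charge_bad(provider: str, amount: int) -> str:
    if provider == "stripe":
        return f"stripe charged {amount}"
    elif provider == "paypal":
        return f"paypal charged {amount}"
    else:
        raise ValueError(f"unknown provider: {provider}")


# GOOD: each provider lives behind a small adapter; callers only
# know the registry, so adding a provider never touches routing code.
class StripeAdapter:
    def charge(self, amount: int) -> str:
        return f"stripe charged {amount}"


class PaypalAdapter:
    def charge(self, amount: int) -> str:
        return f"paypal charged {amount}"


ADAPTERS = {"stripe": StripeAdapter(), "paypal": PaypalAdapter()}


def charge(provider: str, amount: int) -> str:
    try:
        return ADAPTERS[provider].charge(amount)
    except KeyError:
        raise ValueError(f"unknown provider: {provider}") from None
```

Showing both shapes side by side, with the reason one is preferred, tells the model far more than "avoid deep nesting" ever will.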



That gives you a much better chance of getting through PR review, validation, and quality checks the first time.

Less back and forth.
Faster development.
Lower token spend.

Generic principles are useful.
Concrete examples + explicit verification are where things start to get real.