Johnny Butler

April 22, 2026

Good Agentic Development Looks a Lot Like Good Software Engineering

People talk about agentic workflows as if they need a completely different set of principles. I think the opposite.

The more I work with coding agents, the less I think the winning teams will be the ones with the cleverest prompts or the strongest models. I think they will be the teams that understand software engineering discipline deeply enough to make agents work inside it.

Agentic development is not a new category. It is software engineering with the discipline made more explicit, the workflow made more executable, and the verification pushed much closer to the work itself.

Good engineers already work this way.

When a good engineer is moving quickly on something that matters, they do not usually try to solve the whole thing in one pass. They narrow the problem. They work in bounded slices. They make the intended outcome explicit before they start. They get feedback early. They verify as they go. They use that feedback to decide the next move.

Testing is part of that, but not only because it catches regressions. Real verification also shapes better design. If something is awkward to test, that is usually telling you something. The design is too tangled. The responsibilities are mixed together. The surface is too broad. The boundaries are not clean enough. When verification is real, the software usually gets cleaner too.

That is not ceremony. That is how quality survives speed.

Agents need exactly the same pressure.

If you hand an agent a large task and basically say "go deal with it", that is not a serious workflow. Sometimes you will get something impressive back. Sometimes it will even work. But it will also drift. It will do too much. It will solve the wrong version of the problem. It will keep going past the point where a good engineer would stop. It will sound more certain than it should.

That is not because agents are unreliable by nature. It is because the shape of the work is bad.

Humans are not great in that shape either. The difference is that agents can move through bad workflow much faster than people can. A process that is only a bit loose with a human becomes a real risk with an agent.

So the job is not just to write better prompts.

The job is to build a workflow where the right engineering behavior is the easiest behavior for the agent to follow.

The slice has to be bounded. The intended outcome has to be clear. The acceptance criteria have to be visible. The verification path has to be built into the slice. The stop conditions have to be real. And the agent cannot just be asked to build — it has to be asked to prove.
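The requirements above can be sketched in a few lines of code. This is a minimal illustration under assumed names (`TaskSlice`, `run_slice`, `attempt_change` are all hypothetical), not a real agent framework; the point is only that bounds, acceptance criteria, and stop conditions are data the loop enforces, not suggestions in a prompt.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TaskSlice:
    """One bounded unit of agent work. All names here are illustrative."""
    goal: str                                    # intended outcome, stated up front
    max_files_changed: int                       # keeps the change bounded
    acceptance_checks: list[Callable[[], bool]]  # verification built into the slice
    max_attempts: int = 3                        # a real stop condition

def run_slice(task: TaskSlice, attempt_change: Callable[[], int]) -> bool:
    """Drive an agent through one bounded slice.

    `attempt_change` stands in for the agent doing work; it returns the
    number of files it touched. The loop rejects changes that exceed the
    bound, requires every acceptance check to pass (the "prove" step),
    and stops when attempts run out instead of grinding on.
    """
    for _ in range(task.max_attempts):
        files_changed = attempt_change()
        if files_changed > task.max_files_changed:
            continue  # change too broad: reject it rather than accept drift
        if all(check() for check in task.acceptance_checks):
            return True  # verified, not just built
    return False  # stop condition hit: surface the failure instead of guessing
```

The useful property is that failure is an explicit outcome. When the bound is violated or the checks do not pass, the loop returns `False` and the tradeoff surfaces to a human, rather than the agent sounding more certain than it should.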

Once you work that way, something becomes obvious quickly. Agents are not random. They respond strongly to structure. Loose workflow makes them look flaky. Tighter workflow makes them look much more consistent.

Running thousands of jobs has made that one of the clearest lessons. I did not get closer to the result I wanted by trusting the models more. I got there by trusting the workflow more. By building a system that keeps forcing the same loop I would want from a strong engineer: reduce the problem, keep the change bounded, verify the slice, surface tradeoffs honestly, and stop when confidence runs out.

Once that loop is real, the output feels very different. It stops feeling like a clever demo. It starts feeling like a production system.

The teams that will win here are not the ones with access to stronger models. They will be the teams that understand software engineering well enough to make agents work inside it. Teams that know how to break work down. Teams that treat verification as a design tool, not just a gate. Teams that make evidence travel with the work.

That is when agentic development stops being interesting in theory and starts being useful in practice.

If you want to see the public proof surface behind that idea, start here: exploremyprofile.com/dark-factory