Johnny Butler

February 14, 2026

Engineering Interviews Are Changing in the AI Era

Engineering interviews have always been a proxy.

We can’t fully simulate real work in an hour, so we use exercises and questions to approximate signal:
  • how someone thinks
  • how they trade off speed vs quality
  • how they handle uncertainty
  • whether they validate and de-risk
  • whether they communicate clearly

The final code someone produces was never the whole point. It was a clue. The real signal was the thinking that led to it.

AI makes that even more true, because AI lowers the cost of producing decent-looking code quickly. If you judge candidates mainly on the final artifact, you’ll start selecting for “who can drive the tool to output something plausible” rather than “who can build reliable systems”.

So what changes?
The code becomes less diagnostic. The process becomes the signal.

What you want to assess now
In an AI-assisted world, the highest-signal questions are about judgment and workflow:
  • can the candidate frame the problem clearly?
  • can they state assumptions and constraints?
  • do they choose a sensible approach without over-engineering?
  • do they recognise risk and edge cases?
  • do they validate behaviour (tests, examples, invariants)?
  • do they know when to stop and ask for clarification?
  • do they notice when the model is confidently wrong?

In other words: can they use AI as a tool without outsourcing responsibility?

How interviews might evolve (practically)
I don’t think the answer is “ban AI” in interviews. In most real roles, engineers will use it anyway.
A better approach is to design interviews that make the workflow visible.

For example:
  1. “Build with AI, but narrate your decisions”: let the candidate use AI, but ask them to explain what they’re asking for and why. The goal is to see judgment, not typing speed.
  2. “Here’s an existing codebase problem — de-risk it”: instead of greenfield puzzles, give them a small messy snippet or failing behaviour and ask them to improve it safely. This surfaces how they validate and manage risk.
  3. “Spot the risk” prompts: give them an AI-generated solution (intentionally imperfect) and ask them what could go wrong. This tests review ability, edge-case thinking, and practical paranoia.
  4. “Make it safe”: ask for the rollout plan (feature flags, monitoring, rollback, migrations, performance considerations). This is where seniority shows up fast.
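To make the “Spot the risk” format concrete, here is the kind of plausible-looking snippet an interviewer might hand over. This is a hypothetical exercise of my own construction, not a prescribed question; the flaws are deliberate, and a strong candidate should surface them in review:

```python
# Hypothetical "spot the risk" exercise: this reads like reasonable
# AI-generated code, but it hides edge-case bugs a reviewer should catch.

def deduplicate_users(users):
    """Return users with duplicate emails removed, keeping the first occurrence."""
    seen = set()
    result = []
    for user in users:
        # Deliberate flaw: emails are compared case-sensitively, so
        # "A@x.com" and "a@x.com" are treated as different users.
        # Deliberate flaw: raises KeyError on records with no "email" field,
        # and does nothing about leading/trailing whitespace.
        email = user["email"]
        if email not in seen:
            seen.add(email)
            result.append(user)
    return result
```

A good answer names the case-sensitivity bug, the missing-field crash, and asks what the upstream data actually looks like before “fixing” anything.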

What the transcript reveals
If you allow AI in an interview, the candidate’s interaction with the model becomes evidence:
  • do they give clear constraints or vague prompts?
  • do they ask for alternatives or accept the first answer?
  • do they test early and iterate until green?
  • do they notice contradictions?
  • do they ask the model to identify risks and failure modes?
  • do they do a clean re-implementation after a spike?

Two candidates can end up with similar code. The transcript shows who you can trust.

A simple rubric that works
If I had to reduce it to a few axes, it would be:
  • problem framing (clarity, assumptions, scope)
  • trade-offs (simplicity vs flexibility, speed vs safety)
  • validation (tests, examples, invariants)
  • risk awareness (failure modes, rollout, rollback)
  • communication (can they explain what they’re doing and why)
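If you want scores to be comparable across interviewers, the axes above can be captured as lightweight structured data. A minimal sketch, assuming a 1–5 scale per axis (the axis names mirror the list above; everything else is illustrative):

```python
# Hypothetical sketch: the rubric axes as structured data, so interviewers
# score the same dimensions on the same 1-5 scale.
RUBRIC_AXES = [
    "problem framing",
    "trade-offs",
    "validation",
    "risk awareness",
    "communication",
]

def score_candidate(scores):
    """scores: dict mapping each axis to an integer 1-5.
    Returns the average across all axes; raises on missing or out-of-range axes."""
    missing = [axis for axis in RUBRIC_AXES if axis not in scores]
    if missing:
        raise ValueError(f"missing axes: {missing}")
    for axis, value in scores.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{axis} score {value} out of range 1-5")
    return sum(scores[axis] for axis in RUBRIC_AXES) / len(RUBRIC_AXES)
```

The point is not the arithmetic; it is that forcing a score per axis stops interviewers from anchoring on the final code alone.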

AI doesn’t replace these skills. It amplifies them.

Engineering interviews are shifting from “can you produce code under pressure?” toward “can you produce reliable outcomes with modern tools?”
In an AI world, the code is output.

The process — the decisions, the validation, the risk management — is the signal.