When teams start writing agent instructions, there’s a very common instinct:
Make the instructions “safe” by turning them into a big set of deterministic rules.
It usually starts with something small, often around wording.
For example:
- if a delivery date is in the past, say “was due”
- if a delivery date is in the future, say “expected”
- if it’s today, say something else
- if the date is missing, choose a fallback phrase
It looks disciplined. It feels like you’re removing ambiguity.
But this often creates a brittle system, because you’re rebuilding a rules engine in English.
And the more rules you add, the more you have to debug the rules.
Why this goes wrong
LLMs are actually good at context-aware language. They can choose phrasing based on tone, surrounding context, and what the user is likely to infer.
When you over-specify wording with if/else trees, you tend to create problems like:
- odd edge cases (“today” becomes weird)
- contradictions between rules (“past date” rule conflicts with “cancelled order” rule)
- hard-to-maintain instructions that grow with every exception
- outputs that feel robotic because you’re forcing phrasing rather than intent
In other words: you make the instruction set complex in exactly the same way legacy code becomes complex.
A better pattern: intent + constraints + format
What tends to work better is a pattern that’s very familiar to engineers:
- specify intent (what we are trying to achieve)
- specify constraints (what must be true / what must never happen)
- specify format (how the output should be structured)
- let the model choose the wording
- add deterministic rules only for known failure modes you’ve actually observed
This keeps the instructions maintainable, and it uses the model for what it’s good at: language.
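As a sketch of what this pattern can look like in practice, here is a hypothetical system prompt structured as intent / constraints / format. The section names and wording are illustrative, not a standard; the message format assumes a typical chat-completion-style API.

```python
# Hypothetical prompt illustrating the intent/constraints/format pattern.
SYSTEM_PROMPT = """\
## Intent
Help the customer understand their order status clearly and calmly.

## Constraints
- Never invent a date; if none is present, say so.
- If a date is an estimate, say it is an estimate.
- Never imply the customer is at fault.

## Format
- 2-3 short sentences, plain language.
- State the order status first, then the date (if known).
"""

def build_messages(order_summary: str) -> list[dict]:
    # Wording is left to the model; only intent, constraints,
    # and format are pinned down.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": order_summary},
    ]
```

Note what is absent: there is no rule about "was due" vs "expected". The model picks the phrasing; the constraints only fence off the harmful outcomes.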
Where deterministic rules do make sense
This doesn’t mean “no rules”.
It means rules should be targeted and justified.
Rules are useful when you have a specific failure mode that causes real harm, for example:
- the model implies certainty when a date is only an estimate
- the model invents a date when one isn’t present
- the model promises a delivery time that operations can’t guarantee
- the model uses wording that confuses customers (“due” vs “expected” in a way that implies blame)
In those cases, write the smallest rule that prevents that failure.
For example, instead of a full tense engine, you might add constraints like:
- if a date is uncertain, explicitly say it’s an estimate
- never imply the customer missed something
- don’t use language that suggests a promise unless the system state is confirmed
- keep date phrasing consistent and clear
Then let the model handle the rest.
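One way to keep rules "targeted and justified" is to tie each one to an observed failure mode, so constraints grow one minimal line at a time rather than as a decision tree. The failure-mode names and rule wording below are hypothetical:

```python
# Hypothetical: constraints start small and grow only when a real
# failure is observed in practice -- one minimal rule per failure.
BASE_CONSTRAINTS = [
    "If a date is uncertain, explicitly say it is an estimate.",
    "Never imply the customer missed something.",
]

# Observed failure modes mapped to the smallest rule that prevents each.
OBSERVED_FAILURES = {
    "invented_date": "Never state a date that is not in the order data.",
    "implied_promise": "Do not promise a delivery time unless the "
                       "system state is confirmed.",
}

def constraints_for(observed: set[str]) -> list[str]:
    """Build the constraint list for the current set of known failures."""
    extra = [OBSERVED_FAILURES[f] for f in sorted(observed)
             if f in OBSERVED_FAILURES]
    return BASE_CONSTRAINTS + extra
```

The useful property is the audit trail: every rule in the prompt traces back to a concrete failure, so you can also retire rules when the failure stops occurring.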
A concrete example: “expected” vs “was due”
We hit this exact discussion when writing instructions for customer-facing order updates.
There was a temptation to hardcode wording rules depending on whether the delivery date was past or future.
It’s a reasonable instinct — but the key is to avoid turning it into a growing decision tree.
The pragmatic approach is:
- define the intent (be clear, reduce anxiety, don’t over-promise)
- define constraints (don’t hallucinate, don’t contradict order state)
- define format (short, scannable, consistent)
- only add a rule if you see a specific repeated problem in practice
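Applied to the "expected" vs "was due" case, the whole tense engine can collapse into one hedged guidance line in the prompt. The wording below is illustrative, not a tested production prompt:

```python
# One guidance line replaces the past/future/today/missing branches:
# the model decides the tense, the line pins down the intent.
DATE_GUIDANCE = (
    "When mentioning a delivery date, make clear whether it has passed "
    "or is upcoming, and say if it is an estimate. Choose natural wording."
)
```

If a specific, repeated wording failure shows up later, that is the moment to add one targeted rule beside this line.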
The takeaway
Agent instructions should not become a brittle if/else rules engine.
Start with intent, constraints, and output format. Let the model do what it’s good at. Add deterministic rules only for failures you’ve actually observed.
That keeps the system reliable and maintainable as it evolves.