Designing for Drift

Why we treat AI like a calculator, and what happens when we do.


Recently, I shared a story about a brewery. It was a small fable about artisans who tried to remove every bit of variation from their process, only to discover that the “drift” was the source of its character. They achieved perfection, and lost the product.

Today, I want to talk about the world outside that story. Because we are starting to do something very similar with AI.

The Calculator Trap

There are systems where the output changes every time you run them. Not because something is broken, but because that is simply how the process works.

We understand this in many domains: agriculture, weather forecasting, large-scale manufacturing. Yet when it comes to AI, we keep acting surprised.

Same input. Different output. And suddenly, it feels like a betrayal.

“Why can’t it just give me the right answer?”

This reaction is natural. It comes from a habit we have built over decades. We are used to calculators.

Excel is a calculator: same input, same output, 100 times out of 100. Accounting software is a calculator. It never guesses. Much of our digital infrastructure is built on this deterministic promise.

So when we see an input box on a screen, whether it is ChatGPT or a customer-service bot, we instinctively expect the same behavior. We expect 0 or 1. We expect a single, stable truth.

But AI is not a calculator.
It is an engine of drift.

It works with probabilities, context, and interpretation: things that, by their nature, refuse to stay perfectly still.
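A minimal sketch of where that drift comes from. The probabilities below are invented for illustration, but the mechanism is the real one: a language model produces a distribution over next tokens, and the system samples from it, so the same input can legitimately produce different outputs.

```python
import random

# Hypothetical next-token probabilities for one fixed prompt.
# A model's output is a distribution, not a single stable answer.
next_token_probs = {"blue": 0.55, "grey": 0.30, "overcast": 0.15}

def sample_token(probs, rng):
    """Draw one token from the distribution -- the source of drift."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()
# Same input, run twice: both results are "correct" draws.
print(sample_token(next_token_probs, rng))
print(sample_token(next_token_probs, rng))
```

Pinning the sampler to always take the most likely token makes the output stable, but it is exactly the narrowing trade-off this essay describes: you trade interpretation for repeatability.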

Drift Is Not a Bug. It’s a Property.

In real engineering, drift is not a new idea. Outside of software demos, engineers rarely deal in absolutes.

A reverse-osmosis membrane never removes exactly 100% of impurities. Sensors drift. Materials age. Models decay.

Consider semiconductor manufacturing. Even in the most controlled environments on Earth, a fabrication line never produces perfectly identical chips. Performance appears as a distribution across a range.

Engineers do not respond by scrapping the line. They treat the variance not as a failure, but as something to be measured and managed.
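"Managed, not eliminated" has a standard form in manufacturing: statistical process control. The measurements below are invented, but the Shewhart-style check is the classic technique; variation inside the control band is accepted as the process's nature, and only points outside it trigger intervention.

```python
import statistics

# Hypothetical chip measurements (e.g., clock speed in GHz).
# A real fab line sees a distribution, never identical values.
measurements = [3.01, 2.98, 3.03, 2.99, 3.02, 3.00, 2.97, 3.04]

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

# Shewhart-style control limits: mean +/- 3 standard deviations.
lower, upper = mean - 3 * sigma, mean + 3 * sigma

# Only points outside the band count as a problem to act on.
out_of_control = [m for m in measurements if not (lower <= m <= upper)]
print(f"band: [{lower:.3f}, {upper:.3f}], out of control: {out_of_control}")
```

The design decision is not "drift or no drift" but where to draw the band, and that is a human choice, not a property of the process.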

This is what engineering really is: not the elimination of variation, but the placement and management of it.

The Cost of Forcing Stillness

When we try to force AI into the role of a calculator, we are asking it to stop interpreting.

To eliminate drift entirely, we would have to constrain the system so tightly that it stops behaving like a form of intelligence at all. It would become predictable, stable, and safe. But also narrow, rigid, and lifeless.

In the brewery story, the artisans eventually succeeded. They eliminated the drift. The process became perfectly stable. And in doing so, they lost the very thing that made the product alive.

We are approaching a similar decision point with AI. If we demand perfectly stable answers in every situation, we may succeed. But we might also lose the very quality that makes it useful: its ability to interpret, suggest, reframe, and surprise.

Designing With Drift

The real question is not: “How do we make AI perfectly stable?”

It is: “Who is responsible for interpreting the drift?”

If the answer is “the model,” disappointment is inevitable. If the answer is “the human, supported by the system,” then real design work begins.

That means:

  • Deciding where variation is acceptable.
  • Deciding where determinism is required.
  • Designing human checkpoints where interpretation matters.
  • Treating AI not as a magic wand, but as a material with yield, tolerance, and character.
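The checklist above can be sketched as a routing pattern. Everything here is a hypothetical stand-in (the function names, the fake model call, the approval rule), but it shows the shape: deterministic code owns what must be exact, the model owns interpretation, and a human checkpoint sits where the drifting output matters.

```python
# Hypothetical sketch of "placing" the drift rather than eliminating it.

def compute_invoice_total(line_items):
    # Determinism required: never delegate arithmetic to a model.
    return sum(qty * price for qty, price in line_items)

def draft_summary_with_model(invoice_text):
    # Variation acceptable: stand-in for a real (nondeterministic) LLM call.
    return f"Draft summary of: {invoice_text}"

def human_checkpoint(draft, approve):
    # Interpretation matters: a person accepts or rejects the drifting part.
    return draft if approve(draft) else None

total = compute_invoice_total([(2, 9.99), (1, 5.00)])   # exact: 24.98
draft = draft_summary_with_model(f"invoice totaling {total:.2f}")
final = human_checkpoint(draft, approve=lambda d: len(d) < 200)
```

The point of the sketch is the boundary, not the code: the total would be wrong if it drifted, the summary is allowed to, and the checkpoint is where responsibility for interpreting the drift lands on a human.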

Because sometimes, what looks like imperfection is simply the signature of a living system.