Tessera must know what she does not know. This week I built the confidence calibration system, and it embodies one of the most important architectural decisions in the project.
The Problem With Confident Systems
Most AI systems are uniformly confident. They present every output with the same authority whether they are drawing on dense precedent or extrapolating from nothing. This is dangerous in a decision support system. If Tessera presents a recommendation with high confidence when the precedent is thin, I will trust it more than I should. If she presents a recommendation with low confidence when the precedent is rock solid, I will waste time second-guessing good guidance.
How Confidence Is Computed
Tessera’s confidence reflects three factors. Precedent density: how many structurally similar decisions exist in the lattice. Outcome coverage: what percentage of those precedent decisions have documented outcomes. Retrieval convergence: whether graph traversal, vector search, and lexical matching agree on the relevant precedents.
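The three factors can be made concrete with a small sketch. Everything here is illustrative: the function names, the saturation count, and the idea of measuring convergence as intersection-over-union of the three retrievers' result sets are my assumptions, not a description of Tessera's actual internals.

```python
# Hedged sketch of the three confidence factors. All names, data shapes,
# and constants are assumptions made for illustration.

def precedent_density(similar_ids: list[str], saturation: int = 20) -> float:
    """How many structurally similar decisions exist in the lattice,
    squashed to [0, 1]. `saturation` (assumed) is the count at which
    density maxes out."""
    return min(len(similar_ids), saturation) / saturation

def outcome_coverage(similar_ids: list[str], outcomes: dict[str, str]) -> float:
    """Fraction of the precedent decisions that have a documented outcome."""
    if not similar_ids:
        return 0.0
    return sum(1 for d in similar_ids if d in outcomes) / len(similar_ids)

def retrieval_convergence(graph: set[str], vector: set[str],
                          lexical: set[str]) -> float:
    """Agreement between graph traversal, vector search, and lexical
    matching, measured as intersection over union of their result sets."""
    union = graph | vector | lexical
    if not union:
        return 0.0
    return len(graph & vector & lexical) / len(union)
```

For example, if the three retrievers return `{"d1", "d2"}`, `{"d1"}`, and `{"d1", "d3"}`, convergence is 1/3: one decision in common out of three retrieved overall.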
High confidence requires all three: dense precedent, documented outcomes, and convergent retrieval. If any factor is weak, confidence drops. If two or more are weak, Tessera flags the response as speculative and recommends further investigation.
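The combination rule above can be sketched as a small mapping from factor scores to a qualitative label. The 0.4 cutoff for "weak" and the label names are assumptions for illustration.

```python
# Hedged sketch of the combination rule: high confidence needs all three
# factors strong; one weak factor lowers confidence; two or more weak
# factors make the answer speculative. The cutoff is an assumption.

WEAK = 0.4  # assumed threshold below which a factor counts as "weak"

def confidence_label(density: float, coverage: float,
                     convergence: float, weak: float = WEAK) -> str:
    """Map three factor scores (each in [0, 1]) to a qualitative label."""
    weak_count = sum(score < weak for score in (density, coverage, convergence))
    if weak_count == 0:
        return "high"
    if weak_count == 1:
        return "moderate"
    return "speculative"  # two or more weak factors: flag and investigate
```

A qualitative label rather than a raw number keeps the output honest: a single score like 0.73 invites false precision, while "speculative" tells the reader exactly how to treat the recommendation.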
Behavioral Alignment
The calibration maps to my actual behavior. In domains where I have handled hundreds of similar situations, I am decisive. Tessera should be too. In novel territory, I slow down and ask more questions. Tessera does the same. In genuinely unprecedented situations, I explicitly flag uncertainty and escalate. Tessera mirrors this.
The goal is not artificial humility. It is calibrated honesty about the limits of available evidence. That is how I operate, and Tessera must operate the same way to be trustworthy.