A breakthrough in the conceptual architecture this week. I have been thinking about how to structure Tessera’s internal representation of my decision history, and the answer is not a timeline. It is a lattice.
Why Timelines Fail
A timeline tells a story. First this happened, then that happened. But that is not how judgment works. When I face a new situation, I do not think “what did I do last Tuesday?” I think “where have I seen this pattern before, across any domain, at any time?”
A decision I made about legal risk in 2008 might be the most relevant precedent for a security architecture decision in 2024. Not because the domains are related, but because the underlying structure of the tradeoff is identical: irreversible commitment under uncertainty with asymmetric downside.
The Lattice Structure
Every decision becomes a node in a graph. Edges connect decisions that share structural properties: similar risk profiles, similar constraint sets, similar stakeholder dynamics, similar outcome patterns. The graph is not organized by time or domain. It is organized by the shape of the decision itself.
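A minimal sketch of what that could look like. All of the field names, the feature set, and the similarity function here are my illustrative assumptions, not Tessera's actual schema; the point is only that edges come from the shape of the decision, never from its date or domain:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    """A node in the lattice: one decision, described by its structure."""
    id: str
    domain: str            # legal, security, finance... (metadata only)
    year: int              # metadata only; never used to form edges
    # Structural features (illustrative), each scaled to [0, 1]:
    irreversibility: float
    uncertainty: float
    downside_asymmetry: float

    def shape(self) -> tuple:
        return (self.irreversibility, self.uncertainty, self.downside_asymmetry)

def structural_similarity(a: Decision, b: Decision) -> float:
    """1.0 for identical decision shapes, falling toward 0 as they diverge."""
    dist = sum((x - y) ** 2 for x, y in zip(a.shape(), b.shape())) ** 0.5
    return 1.0 / (1.0 + dist)

def build_lattice(decisions, threshold=0.8):
    """Connect every pair of decisions whose structure, not surface, matches."""
    edges = []
    for i, a in enumerate(decisions):
        for b in decisions[i + 1:]:
            sim = structural_similarity(a, b)
            if sim >= threshold:
                edges.append((a.id, b.id, sim))
    return edges
```

Under this sketch, a 2008 legal decision and a 2024 security decision with near-identical risk profiles end up adjacent in the graph, while a structurally different decision from the same domain does not.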
This means Tessera can traverse from a current problem to relevant precedent across domains and decades, following the structural similarity rather than the surface similarity. “This feels like that time with the vendor contract” becomes a formal graph traversal rather than a vague human intuition.
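That traversal could be a best-first walk over edge weights. This is a toy sketch under my own assumptions: the lattice contents are invented, and scoring a chain of analogies by its weakest link is one plausible heuristic, not a settled design:

```python
import heapq

# Hypothetical lattice: adjacency lists of (neighbor_id, structural_similarity).
LATTICE = {
    "vendor-contract-2011": [("cloud-migration-2019", 0.93), ("office-lease-2006", 0.71)],
    "cloud-migration-2019": [("vendor-contract-2011", 0.93), ("db-choice-2023", 0.88)],
    "office-lease-2006":    [("vendor-contract-2011", 0.71)],
    "db-choice-2023":       [("cloud-migration-2019", 0.88)],
}

def strongest_precedents(start, k=3):
    """Walk outward from a current problem, visiting the most structurally
    similar decisions first. A chain of analogies is only as strong as its
    weakest edge, so path strength is the minimum similarity along the way."""
    best = {}                      # node id -> strongest path strength found
    frontier = [(-1.0, start)]     # max-heap via negated strength
    while frontier and len(best) < k + 1:
        neg_strength, node = heapq.heappop(frontier)
        if node in best:
            continue
        best[node] = -neg_strength
        for neighbor, sim in LATTICE.get(node, []):
            if neighbor not in best:
                heapq.heappush(frontier, (-min(-neg_strength, sim), neighbor))
    best.pop(start, None)          # the query itself is not a precedent
    return sorted(best.items(), key=lambda kv: -kv[1])
```

Starting from a current problem node, the result is a ranked list of precedents ordered by structural relevance, which is exactly the "this feels like that time" move made explicit.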
Why This Is Different From Every Other Personal AI
Most personal AI attempts are really sophisticated search engines. They find text that matches a query. Tessera is designed to find decisions that match a situation. The unit of retrieval is not a document or a passage. It is a decision moment with its full context: signals, constraints, alternatives considered, choice made, and outcome observed.
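The retrieval unit described above could be captured as a record like this. The field names are hypothetical placeholders; the structure just mirrors the list in the paragraph, with an unresolved outcome until one is observed:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionMoment:
    """The unit of retrieval: a decision with its full context, not a passage."""
    situation: str                 # what was happening
    signals: list                  # what was observed at the time
    constraints: list              # what bounded the option space
    alternatives: list             # options seriously considered
    choice: str                    # what was actually done
    outcome: Optional[str] = None  # observed later; None until known

    def is_resolved(self) -> bool:
        """A moment becomes a full precedent only once its outcome is known."""
        return self.outcome is not None
```

One consequence of this shape: a decision enters the lattice before its outcome is known, and the record is completed, not replaced, when the outcome arrives.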
A polymath without internal consistency produces noise. A polymath with documented consistency produces a lattice. I have the consistency. Now I need to build the lattice.