The knowledge graph is Tessera’s nervous system. Vector embeddings capture what things mean. The lexical index captures what things say. The graph captures how things relate. And relationships, not content, are what make a life assistant possible.

I spent three weeks on graph schema design before writing a single line of ingestion code. The schema determines what Tessera can reason about. Get it wrong and no amount of data fixes it.

Node Types

The graph has twelve node types, refined through iteration: Person, Organization, Technology, Decision, Outcome, Project, Incident, Document, Concept, Commitment, Location, and TimeFrame. Every artifact in the corpus is decomposed into instances of these node types during enrichment.
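The twelve node types can be pinned down as a closed vocabulary. A minimal sketch, assuming a Python enum (the name `NodeType` is mine, not Tessera's):

```python
from enum import Enum

class NodeType(Enum):
    """The twelve node types every artifact decomposes into."""
    PERSON = "Person"
    ORGANIZATION = "Organization"
    TECHNOLOGY = "Technology"
    DECISION = "Decision"
    OUTCOME = "Outcome"
    PROJECT = "Project"
    INCIDENT = "Incident"
    DOCUMENT = "Document"
    CONCEPT = "Concept"
    COMMITMENT = "Commitment"
    LOCATION = "Location"
    TIMEFRAME = "TimeFrame"
```

Treating the vocabulary as closed is what makes the schema a constraint: enrichment can only emit these types, so every downstream traversal knows exactly what it might encounter.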

A single email might create or update nodes for the sender (Person), the client (Organization), the technology discussed (Technology), a decision communicated (Decision), the expected result (Outcome), and the project context (Project). The email itself is a Document node that links to all of them.
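The decomposition of a single email can be sketched as follows. This is a hypothetical shape, not Tessera's actual ingestion code; the `Node` and `decompose_email` names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_type: str
    name: str

@dataclass
class DocumentNode(Node):
    # The Document node links back to everything extracted from it.
    links: list = field(default_factory=list)

def decompose_email(sender, client, tech, decision, outcome, project):
    """Hypothetical enrichment step: one email yields six typed nodes
    plus a Document node that links to all of them."""
    extracted = [
        Node("Person", sender),
        Node("Organization", client),
        Node("Technology", tech),
        Node("Decision", decision),
        Node("Outcome", outcome),
        Node("Project", project),
    ]
    return DocumentNode("Document", f"email-from-{sender}", links=extracted)
```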

The richness of the graph comes from density, not size. A corpus with two hundred thousand artifacts produces a graph with four hundred thousand nodes because each artifact contributes to multiple node types. The edges between those nodes, over a million of them, encode the relationships that make retrieval intelligent.

Edge Types and Weights

Edges are typed and weighted. The types include: authored, decided, affected, preceded, followed, contradicted, confirmed, escalated, resolved, recommended, implemented, and evaluated. Each edge type has a weight that reflects the strength of the relationship.

Weights are not static. When a decision node is connected to an outcome node and that outcome later proves to be positive, the weight of the “decided → outcome” edge increases. When an outcome is negative, the weight decreases. Over time, the graph encodes not just what happened but how well things worked.
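One simple way to realize this feedback is an exponential moving average that nudges the edge weight toward observed outcome quality. The update rule below is my assumption, not Tessera's documented formula:

```python
def update_edge_weight(weight, outcome_score, rate=0.2):
    """Nudge a 'decided -> outcome' edge weight toward the observed
    outcome quality. outcome_score is in [0, 1], where 1.0 is a
    clearly positive outcome. Hypothetical update rule (EMA)."""
    return (1 - rate) * weight + rate * outcome_score
```

A positive outcome (`outcome_score` near 1.0) pulls the weight up; a negative one pulls it down, and repeated observations dominate any single data point, which is the behavior the edge-memory idea needs.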

This is the mechanism that enables Tessera to say: “The last time you used this approach with a similar client, the outcome was suboptimal. Here is what you did differently in a case that worked better.” The graph knows because the edges remember.

Graph Traversal for Remediation

Technical remediation queries produce some of the most interesting graph traversals. A query about a failing Exchange environment traverses from the Technology node (Exchange) through Incident nodes to Decision nodes, then through Person nodes to find who was involved, then through Outcome nodes to find what worked.

The traversal depth is bounded, typically three to four hops, to prevent the combinatorial explosion that makes unbounded graph search useless. Each hop is filtered by edge type relevance: a remediation query follows “resolved,” “escalated,” and “implemented” edges preferentially over “authored” or “evaluated” edges.
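The bounded, edge-filtered traversal described above can be sketched as a breadth-first expansion. For simplicity this version follows preferred edge types only, where the real system presumably ranks rather than hard-filters; the adjacency-list shape and `PREFERRED` set are my assumptions:

```python
from collections import deque

# Edge types a remediation query follows preferentially.
PREFERRED = {"resolved", "escalated", "implemented"}

def bounded_traverse(graph, start, max_hops=3):
    """BFS capped at max_hops, following only preferred edge types.
    'graph' maps node -> list of (edge_type, neighbor) pairs."""
    visited = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # the hop bound prevents combinatorial explosion
        for edge_type, neighbor in graph.get(node, []):
            if edge_type in PREFERRED and neighbor not in visited:
                visited.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return visited
```

On the Exchange example, the walk would move Technology → Incident → Decision → Outcome in three hops, ignoring “authored” edges along the way.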

The result is a subgraph: a small, relevant slice of the full knowledge graph that contains the context needed for the current question. That subgraph, serialized into text, becomes the context that the language model uses to generate its response. The model never sees the full graph. It sees only the curated subgraph that the retrieval system determined was relevant.
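Serializing the retrieved subgraph into model context can be as plain as emitting one triple per line. A minimal sketch, assuming the subgraph arrives as `(source, edge_type, target)` tuples (a shape I am inventing for illustration):

```python
def serialize_subgraph(edges):
    """Render a retrieved subgraph as plain-text triples suitable
    for a language model's context window."""
    return "\n".join(f"{src} --{etype}--> {dst}" for src, etype, dst in edges)
```

The flat triple format keeps the serialization deterministic and cheap to truncate, which matters when the subgraph has to fit a fixed context budget.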

This is the key insight: the intelligence is in the retrieval, not the generation. A good retrieval system paired with a mediocre language model will outperform a mediocre retrieval system paired with the best language model available. I am betting the architecture on this principle.