Temporal

Minimal temporal knowledge graph. Graphiti’s time-aware graph runtime stripped from 53,000 lines to 2,800.

Python 3.11+ · Apache-2.0

Temporal is a knowledge graph that knows when things were true. It extracts entities and relations from text, tracks how facts change over time, and lets you query the state of the world at any point in history.

from temporal import retain, search

# Learn a fact
await retain("Alice joined Acme Corp in March 2024", store=store, llm=llm, embedder=embedder)

# Later...
await retain("Alice left Acme Corp in January 2025", store=store, llm=llm, embedder=embedder)

# Query: what's true now?
results = await search("where does Alice work?", store=store, embedder=embedder)
# → returns the January 2025 relation (Alice LEFT Acme), old relation is invalidated

No Neo4j. No graph database. No Docker. Just SQLite.


Architecture

flowchart LR
    E["Episode / document / message"] --> R["retain()"]
    R --> X["LLM extraction"]
    X --> N["Entities"]
    X --> L["Relations"]
    N --> Q["Resolution engine"]
    L --> Q
    Q --> S["SQLiteTemporalStore"]
    S --> H["search()"]
    V["Embedder"] --> H
    H --> O["Time-aware results"]

Temporal does one job: ingest facts, resolve canonical entities and relations, and return what is true now or what was true at a point in time.


Size comparison

| Component | Graphiti | Temporal | Reduction |
|---|---|---|---|
| Neo4j driver | 2,785 LOC | 0 LOC | 100% |
| FalkorDB driver | 2,883 LOC | 0 LOC | 100% |
| Kuzu driver | 2,889 LOC | 0 LOC | 100% |
| Neptune driver | 2,816 LOC | 0 LOC | 100% |
| All graph DB drivers | 13,109 LOC | 430 LOC (SQLite) | 97% |
| LLM adapter + prompts | ~3,000 LOC | 417 LOC | 86% |
| Types + interfaces | ~2,000 LOC | 402 LOC | 80% |
| Total | ~53,000 LOC | ~2,800 LOC | 95% |

What was cut: Neo4j/FalkorDB/Kuzu/Neptune drivers, cloud graph DB orchestration, Pydantic v2 model layer, Langchain integration, REST API surface, Docker config, multi-tenancy abstractions.

What remains: the temporal logic — entity resolution, relation invalidation, time-aware search.


Install

pip install httpx  # only external dependency

Copy the temporal/ folder into your project. For embeddings, bring any function that returns list[float].

temporal/
├── types.py        # Data model (Episode, Entity, Relation, SearchResult)
├── interfaces.py   # Protocol definitions (LLMClient, Embedder, TemporalStore)
├── store.py        # SQLite implementation
├── llm_adapter.py  # OpenAI-compatible LLM client
├── prompts.py      # Extraction + resolution prompts
├── resolve.py      # Entity + relation resolution engine
├── retain.py       # Ingest pipeline
└── search.py       # Hybrid text + vector search

Quick start

import asyncio
from temporal import retain, search, SQLiteTemporalStore
from temporal.llm_adapter import OpenAICompatibleClient
from temporal.interfaces import Embedder

# Minimal embedder stub (replace with your embedding model)
class MyEmbedder:
    async def embed(self, text: str) -> list[float]:
        # Use OpenAI, Ollama nomic-embed-text, sentence-transformers, etc.
        ...

async def main():
    store = SQLiteTemporalStore("memory.db")
    llm = OpenAICompatibleClient(base_url="http://localhost:11434/v1", model="llama3.2")
    embedder = MyEmbedder()
    group_id = "user-123"  # partition per user/agent

    # Ingest facts
    result = await retain(
        content="Sarah is the CTO of Horizon Labs as of Q1 2025.",
        store=store,
        llm=llm,
        embedder=embedder,
        group_id=group_id,
    )
    print(f"Extracted {len(result.entities)} entities, {len(result.relations)} relations")

    # Later, a fact changes:
    await retain(
        content="Sarah left Horizon Labs in June 2025.",
        store=store,
        llm=llm,
        embedder=embedder,
        group_id=group_id,
    )

    # Search — old relation is invalidated, new one surfaces
    results = await search(
        query="Who leads Horizon Labs?",
        store=store,
        embedder=embedder,
        group_id=group_id,
    )
    for r in results.relations:
        print(f"{r.relation.source_entity_name} -[{r.relation.name}]-> {r.relation.target_entity_name}")
        print(f"  fact: {r.relation.fact}")
        print(f"  valid_at: {r.relation.valid_at}  invalid_at: {r.relation.invalid_at}")

asyncio.run(main())

How it works

The temporal knowledge graph

Temporal stores three kinds of objects:

Episodes — the raw inputs (messages, documents, events):

Episode(
    id="...",
    content="Sarah left Horizon Labs in June 2025.",
    episode_type=EpisodeType.message,
    reference_time="2025-06-15T00:00:00+00:00",
    group_id="user-123",
)

Entities — named things extracted from episodes:

Entity(name="Sarah", entity_type=EntityType.person, summary="Executive, formerly CTO at Horizon Labs")
Entity(name="Horizon Labs", entity_type=EntityType.organization, summary="Tech company")

Relations — facts linking entities, with temporal validity:

Relation(
    source_entity_name="Sarah",
    name="LEFT",
    target_entity_name="Horizon Labs",
    fact="Sarah left Horizon Labs.",
    valid_at="2025-06-15T00:00:00+00:00",
    invalid_at=None,  # still true
)

When a new fact contradicts an old one, the old relation gets invalid_at set and a new relation is created. The graph stays accurate — you can query what was true at any timestamp.

Retain pipeline

input text
    ↓
Episode saved
    ↓
LLM extracts entities + relations from episode
    ↓
Resolve: match against existing entities (name + embedding similarity)
    ↓
Resolve: check if relation already exists (dedup or update)
    ↓
If contradicts existing fact → invalidate old relation, save new one
    ↓
Embed relation facts for vector search
    ↓
Save everything to SQLite

Hybrid retrieval — text match + embedding similarity, fused with RRF (Reciprocal Rank Fusion):

results = await search(
    query="Sarah's role",
    store=store,
    embedder=embedder,
    group_id="user-123",
    filters=SearchFilters(
        valid_at_start="2024-01-01T00:00:00+00:00",
        valid_at_end="2025-01-01T00:00:00+00:00",
        include_invalidated=False,  # only facts still true in that window
    ),
    limit=10,
)
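Reciprocal Rank Fusion itself is a small formula: each result scores `1 / (k + rank)` per list it appears in, summed across lists. A sketch (k = 60 is the conventional constant; whether search.py uses exactly this value is an assumption):

```python
def rrf_fuse(ranked_lists: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked result lists with Reciprocal Rank Fusion.

    An item's score is the sum of 1 / (k + rank) over every list it
    appears in (rank is 1-based), so items ranked well by both the
    text search and the vector search float to the top.
    """
    scores: dict[str, float] = {}
    for ranking in ranked_lists:
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```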

Temporal filtering

Every query can be scoped to a point in time or a time window:

from temporal import SearchFilters

# Facts that were true in 2024
filters = SearchFilters(
    valid_at_start="2024-01-01T00:00:00+00:00",
    valid_at_end="2024-12-31T00:00:00+00:00",
    include_invalidated=False,
    include_expired=False,
)

# Only look at specific relation types
filters = SearchFilters(relation_names=["WORKS_AT", "LEADS", "FOUNDED"])

# Filter by entity names
filters = SearchFilters(entity_names=["Sarah", "Alice"])

SQLite schema

episodes       -- raw source content with reference timestamps
entities       -- named things with name embeddings
relations      -- facts between entities (valid_at, invalid_at, expired_at)
episodic_links -- provenance: which episode produced which entity

WAL mode enabled. Vector similarity is computed in Python, which handles up to roughly 10k relations without issue. For larger graphs, implement the TemporalStore protocol on top of a vector database.
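The in-Python similarity pass amounts to cosine similarity over the stored embeddings, something like the following sketch (the actual store.py code may differ in detail):

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)


def top_k(query: list[float], rows: list[tuple[str, list[float]]], k: int = 10) -> list[str]:
    """Rank stored (id, embedding) rows by similarity to the query vector."""
    scored = sorted(rows, key=lambda r: cosine(query, r[1]), reverse=True)
    return [row_id for row_id, _ in scored[:k]]
```

An O(n) scan per query is exactly why this stays comfortable at ~10k relations and becomes the thing to replace first at larger scale.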


Tests

# Unit tests (no LLM required)
python3 -m pytest tests/ -q --ignore=tests/test_e2e.py

# End-to-end test (requires an LLM endpoint)
OPENROUTER_API_KEY=sk-... \
  python3 -m pytest tests/test_e2e.py -v -s

155 tests cover types, entity resolution, relation invalidation, temporal filtering, hybrid search, and the retain pipeline.


What was removed from Graphiti

Temporal is a targeted extraction of Graphiti’s temporal logic, not a port of the full platform.

The temporal validity model (valid_at, invalid_at, expired_at), the entity resolution algorithm, and the extraction prompts are preserved.


Part of a suite

Temporal pairs naturally with:


Requirements

Python 3.11+ and httpx (the only external dependency), plus any OpenAI-compatible LLM endpoint and an embedding function that returns list[float].

License

Apache 2.0. See LICENSE.


Acknowledgments

The temporal knowledge graph design, entity resolution logic, and valid_at/invalid_at model come from Graphiti by Zep AI (Apache 2.0). Temporal is an independent extraction — not affiliated with Zep.

Extracted from Graphiti’s temporal knowledge-graph design. Not tracking upstream; this is a stable standalone extraction, not a rolling fork.