
Granth · Sovereign research substrate

NotebookLM, on your own embeddings.

Granth is what we built when we wanted NotebookLM's ergonomics — ingest a corpus, ask grounded questions, cite back to source — but on embeddings we run, a database we own, and a query model the workspace controls. PDFs and ebooks in. Cited answers out. Promotion into Architect's memory.

Sovereign embeddings via the Dhara Ollama endpoint. BYOK query models. Your corpus is never the training data.

Surface signal

Status · LIVE
Embeddings · Sovereign
API · granth.eleven11.pro

Why this exists

Your reading list shouldn't fund the next model.

Every research-grade AI tool ships with a free tier and an opaque clause about how your inputs will be used. NotebookLM, ChatGPT-with-files, Perplexity Spaces — all excellent, all variations on the same trade: in exchange for the convenience, your corpus enters somebody else's training pipeline. Granth is what we built when we wanted the same workflow without the trade.

Same workflow. Different gravity. Your corpus stays where your trust lives.

Self-sustained by design

Owned, not rented.

Sovereignty in research isn't a feature flag — it's the difference between asking a question of your corpus and donating your corpus to a model.

01

Embeddings on your Ollama, not OpenAI

Every chunk and figure is embedded by an Ollama instance Eleven11 operates (`nomic-embed-text-v1`, 768d). Your corpus is never sent to a third-party model API — sovereign by default, not by configuration.
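A minimal sketch of that embed step, assuming Ollama's standard `/api/embeddings` route; the Dhara endpoint URL is a placeholder and the model tag is the one named above:

```python
# Hedged sketch: the endpoint URL is illustrative; /api/embeddings is
# Ollama's standard embeddings route.
import requests

DHARA_OLLAMA = "https://dhara.example.internal"   # placeholder endpoint
EMBED_MODEL = "nomic-embed-text-v1"               # 768d, as above

def embed_chunk(text: str) -> list[float]:
    """Embed one chunk; no third-party model API ever sees the text."""
    resp = requests.post(
        f"{DHARA_OLLAMA}/api/embeddings",
        json={"model": EMBED_MODEL, "prompt": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]               # 768 floats
```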

02

BYOK for query LLMs — never default to Anthropic

The query model is supplied by the workspace via `X-Workspace-AI-Provider`, `X-Workspace-AI-Model`, and `X-Workspace-AI-Key` headers. A missing key returns retrieval-only results — there is no silent fallback to our own models.
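A minimal sketch of the header contract. The header names are documented; the notebook id, provider/model values, and JSON body shape are illustrative:

```python
# Hedged sketch: header names are the documented contract; the notebook id,
# provider/model values, and request body shape are placeholders.
import requests

resp = requests.post(
    "https://granth.eleven11.pro/v1/notebooks/NOTEBOOK_ID/query",
    headers={
        "x-api-key": "WORKSPACE_API_KEY",           # workspace auth
        "X-Workspace-Id": "WORKSPACE_ID",
        "X-Workspace-AI-Provider": "openai",        # BYOK: your provider...
        "X-Workspace-AI-Model": "gpt-4o",           # ...your model...
        "X-Workspace-AI-Key": "YOUR_PROVIDER_KEY",  # ...your key
    },
    json={"question": "What changed between spec v2 and v3?"},
    timeout=60,
)
# Drop the X-Workspace-AI-* headers and the same call degrades to
# retrieval-only: citations without a generated answer.
print(resp.json())
```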

03

Per-workspace blob storage

Documents land at `<storage_root>/<workspace_id>/<document_id>/...`. Workspace-scoped paths, no cross-tenant access. Tested in `test_workspace_isolation.py`.
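The layout is simple enough to sketch; the root path and names are illustrative:

```python
# Hedged sketch of the workspace-scoped layout; root path is illustrative.
from pathlib import Path

STORAGE_ROOT = Path("/var/granth/blobs")          # placeholder root

def blob_dir(workspace_id: str, document_id: str) -> Path:
    # Every blob path is rooted under its workspace; a request scoped
    # to one workspace can never construct another tenant's path.
    return STORAGE_ROOT / workspace_id / document_id
```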

04

Auth before DB write

`require_workspace` depends on `require_api_key`. Unauthorized requests never upsert a workspace row — the security boundary is enforced upstream of any state change.
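In FastAPI terms the chain looks roughly like this; the key store and route are stand-ins, the ordering is the point:

```python
# Hedged sketch of the dependency ordering; the key store and route are
# stand-ins. What matters: require_api_key resolves before require_workspace.
from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()
VALID_KEYS = {"demo-key"}                         # placeholder key store

async def require_api_key(x_api_key: str = Header(...)) -> str:
    if x_api_key not in VALID_KEYS:
        raise HTTPException(status_code=401)      # rejected before any write
    return x_api_key

async def require_workspace(
    x_workspace_id: str = Header(...),
    api_key: str = Depends(require_api_key),      # auth resolves first
) -> str:
    # Only authenticated requests reach this point; the workspace-row
    # upsert lives here, strictly downstream of the auth boundary.
    return x_workspace_id

@app.get("/v1/documents/{doc_id}")
async def get_document(doc_id: str,
                       workspace: str = Depends(require_workspace)):
    return {"id": doc_id, "workspace": workspace}
```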

05

Soft-fail per chunk on embed failures

If an embedding fails, the chunk persists with `embedding=NULL` and recall narrows to text-search for that row only. No silent corpus loss. No all-or-nothing imports.
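A sketch of the worker loop, reusing the hypothetical `embed_chunk` from the embeddings card above; the row shape is illustrative:

```python
# Hedged sketch of per-chunk soft-fail; row shape is illustrative and
# embed_chunk is the sketch from the embeddings card above.
def index_chunks(chunks: list[str]) -> list[dict]:
    rows = []
    for position, text in enumerate(chunks):
        try:
            vector = embed_chunk(text)            # may fail per chunk
        except Exception:
            vector = None                         # lands as embedding=NULL
        # The row always persists; a NULL embedding narrows recall to
        # text-search for this chunk only. Never drops the corpus.
        rows.append({"position": position, "text": text, "embedding": vector})
    return rows
```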

The primitive

Three records you can name. Recipe-driven.

Granth is built on a small, opinionated trio — a document for source, a chunk for recall, a figure for visual citation. Adding a new file format is a recipe module plus a register call. There is no central parser to refactor when shapes diverge.

01 · Source

Document

documents[]

A typed record per uploaded file. Recipe dispatches by `kind` — PDF today; ebook, audio, transcript on roadmap. Adding a format is a recipe module + a register call.

02 · Recallable unit

Chunk

chunks[]

Parsed text fragments with embeddings (`nomic-embed-text-v1`, 768d), positionally indexed, citable. The unit a query grounds on.

03 · Visual recall

Figure

figures[]

Extracted images and diagrams, embedded with their captions, returned alongside textual citations. Diagrams stop disappearing into the chunk gap.

How it fits the fleet

Where your reading compounds.

Granth is the layer between "your corpus" and "your workspace memory." What you read here grounds what Architect answers there.

architect

Granth citations promote into `memory_units` in one click. A research conclusion grounded in your corpus becomes a recall-visible artifact in the workspace's memory spine.

operator

Fleet-wide ops docs — runbooks, postmortems, vendor specs — ingest into Granth so the team can ask grounded questions without leaving the operator UI.

alerts

Every parse failure, every embed failure, every query emits a normalized event. The pipeline is observable from the same dashboard as the rest of the fleet.

kosh

Document metadata and citation indices land in Kosh tables — research findings join the same searchable substrate as runbooks and partner lists.

dhara

Provides the Ollama endpoint that powers Granth's embeddings. The same engine that audits attack surface also reads research corpora — sovereign all the way down.

BYOK contract

Every workspace chooses its own query model on its own key. We never see your inputs even at query time.

Surfaces & contracts

Six things you actually call.

Granth is API-first — four routes, two header families, a workspace-scoped contract. The smallest surface that does the job, sketched end-to-end below.

POST /v1/documents

Ingest

Upload a file; recipe dispatches by kind, parser persists chunks + figures.

GET /v1/documents/{id}

Document state

Fetch parsed state, chunks, figures, embedding progress.

POST /v1/notebooks/{id}/query

Grounded query

Query a notebook; returns answer + citations grounded in indexed chunks.

GET /v1/health

Health

Liveness check. The only public endpoint.

x-api-key + X-Workspace-Id

Auth headers

Every request gates on workspace API key plus workspace ID. Cross-workspace reads return 404.

X-Workspace-AI-*

BYOK headers

Provider + model + key. Missing key returns retrieval-only — never a silent fallback.
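Putting the routes and header families together: the routes and auth headers below follow the contract above, while the multipart field name and response keys (`id`, `status`) are assumptions:

```python
# Hedged end-to-end sketch: routes and auth headers are as documented;
# the multipart field name and response keys ("id", "status") are assumptions.
import time
import requests

BASE = "https://granth.eleven11.pro"
AUTH = {"x-api-key": "WORKSPACE_API_KEY", "X-Workspace-Id": "WORKSPACE_ID"}

# POST /v1/documents: upload; the recipe dispatches by kind.
with open("postmortem.pdf", "rb") as f:
    doc = requests.post(f"{BASE}/v1/documents",
                        headers=AUTH, files={"file": f}, timeout=120).json()

# GET /v1/documents/{id}: poll parse state and embedding progress.
while True:
    state = requests.get(f"{BASE}/v1/documents/{doc['id']}",
                         headers=AUTH, timeout=30).json()
    if state.get("status") != "processing":      # assumed status values
        break
    time.sleep(2)
```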

Senior engineering, visible

The proofs are in the substrate.

Five decisions, visible in the recipe registry, the dependency graph, the migration discipline, and the deploy shape — design choices, not adjectives.

Recipe pattern, not a parser monolith

`RECIPES_REGISTRY: dict[kind → (name, callable)]`. Adding a new format is a module plus a register call. There is no central parser to refactor when file types diverge.
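In sketch form, with the module bodies illustrative:

```python
# Hedged sketch of the registry shape named above; parse_pdf is a stand-in.
from typing import Callable, Iterator

# kind -> (human-readable name, parser callable)
RECIPES_REGISTRY: dict[str, tuple[str, Callable]] = {}

def register(kind: str, name: str, parse: Callable) -> None:
    RECIPES_REGISTRY[kind] = (name, parse)

# A recipe module registers itself; the ingest path never changes.
def parse_pdf(path: str) -> Iterator[dict]:
    yield {"position": 0, "text": "..."}          # stand-in: chunks + figures

register("pdf", "pdf-recipe", parse_pdf)

def dispatch(kind: str, path: str) -> list[dict]:
    name, parse = RECIPES_REGISTRY[kind]          # no central parser to touch
    return list(parse(path))
```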

Auth before DB write

`require_workspace` depends on `require_api_key` in the FastAPI dependency tree. An unauthorized request never reaches an upsert.

Hand-authored idempotent migrations + monotonic journal

Same migration discipline as Architect. A new `.sql` without a `_journal.json` entry is silently skipped at boot — and the CI guard catches that gap before it ships.
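The guard itself stays small. A sketch, assuming `_journal.json` carries an `entries` list:

```python
# Hedged sketch of the CI guard; the journal's exact shape is an assumption.
import json
from pathlib import Path

def check_journal(migrations_dir: Path) -> None:
    journal = set(json.loads(
        (migrations_dir / "_journal.json").read_text())["entries"])
    on_disk = {p.name for p in migrations_dir.glob("*.sql")}
    # An unjournaled .sql would be silently skipped at boot;
    # fail CI before that gap ever ships.
    missing = on_disk - journal
    if missing:
        raise SystemExit(f"unjournaled migrations: {sorted(missing)}")
```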

Soft-fail granularity per chunk

An embedding failure on one chunk doesn't kill the document. The chunk persists with `embedding=NULL` and recall degrades gracefully to text-search for that row only.

Three processes from one image

API, parse worker, embed worker — separate concurrency stories, separate observability, one Dockerfile. The deploy shape is the same every time.

Who this is for

Teams who read carefully.

Granth earns its keep when your reading list is load-bearing for the work and you'd rather not pay for it twice.

Researchers who want NotebookLM's workflow without the corpus going into a training pipeline.
Engineering teams managing reference corpora — postmortems, vendor specs, operational runbooks — who need grounded query without leaking.
Compliance-heavy organizations where legal cites need to anchor in named documents, not paraphrased model output.
Architect users who want their reading to compound into the same memory their workspace already uses.
Companies that would rather not contribute to anyone's 'your data is the product' business model.

FAQ

Final friction, reduced.

How is this different from NotebookLM?

Same workflow shape; different sovereignty. Embeddings on your Ollama. Query model BYOK. Corpus on your storage. We don't see your inputs.

Can we use it without Architect?

Yes. The API is standalone — supply an API key plus workspace header and POST documents. Architect is the most polished consumer; not the only one.

What formats are supported today?

PDF in γ.1. EPUB, audio transcripts, and web archives are on the recipe roadmap. Adding a format is a recipe module plus a register call — not a fork of the parser.

What about figures and diagrams?

Extracted, embedded with their captions, returned alongside textual citations. Diagrams stop disappearing into the chunk gap.

Discuss Granth

Bring your reading home.

Granth is partner-deployed today — bundled with Architect or standalone. Talk to us about ingest scale, custom recipes, or how to plug it into an existing research workflow.

Direct line

Consultation requests stay owned. We reply from e11 after reviewing fit and timing.