
Operator · The admin plane for the fleet

Eight services, one bearer, no stale dashboard.

Operator is the admin plane Eleven11 itself uses to run pr, cal, alerts, outreach, discovery, dhara, mail, and web. Every other tool publishes a resource and widget contract; Operator aggregates them behind a single UI. No database, no caching layer — every page is a server component that fetches the upstream API on render.

Available to partners running the Eleven11 stack under their own brand. Single-tenant, env-backed login, dark by design — the control plane is yours and runs on your infrastructure.

Surface signal

Status · PARTNER
Storage · None
Workspace · operator.eleven11.pro

Why this exists

Eight services. One bearer. No stale dashboard.

An admin plane that scrapes its data into a local database becomes a liability the moment any of its upstream services moves. The reading is wrong, the action button is wrong, and the operator who trusted it is wrong. Operator was built so the page you are looking at cannot be wrong: there is nothing to go stale, because there is nothing stored.

Every Operator route is a server component that opens fresh fetches against the upstream APIs at render time. The aggregation is the UI; the source of truth stays where it lives.

Self-sustained by design

Owned, not rented.

An admin plane handles the most sensitive operations a team performs. It should be the last thing on a third-party tenant — not the first.

01

No database. None.

Operator has no Prisma, no Postgres, no SQLite of its own. There is no schema to migrate, no rows to back up, no cache to invalidate. The control plane carries no state of its own — only a signed session cookie that proves you are allowed to ask the upstream services questions.

02

No third-party admin SaaS.

Retool, Forest, Appsmith, or anyone else's hosted admin product would mean every credential, every customer record, every alert payload passing through somebody else's tenant. Operator is a Next.js app on your e11-edge network, talking to your services on internal addresses. Nothing leaves the box.

03

One bearer, eight services.

pr, cal, alerts, outreach, discovery, dhara, mail, web — each one already has its own API key and base URL. Operator holds those keys in its server .env, gates every call behind requireOperatorSession, and never exposes them to a browser. NEXT_PUBLIC_* is empty on purpose.

04

Dark by design.

Open the network tab on Operator and you will see fetches to one host: itself. Every upstream call happens server-side, between containers on the e11-edge bridge. The browser never learns where pr-api lives, what its key looks like, or that discovery has a separate admin token.

05

Yours. End to end.

Partners running Eleven11 stacks under their own brand get the same Operator, on their own infrastructure, with their own credentials. No phone-home, no telemetry to us, no shared admin tier. The control plane is yours and runs on your iron.

The primitive

Three layers. Strictly separated.

Operator's architecture is small and aggressively boring. Two files per upstream service, one render layer that consumes them, one auth gate enforced at the boundary. Adding the ninth integration looks identical to the first.

λ.1 · Fetch layer

lib/<svc>

lib/pr.ts · lib/cal.ts · lib/dhara.ts

The thinnest possible upstream client. Reads <SVC>_API_BASE_URL and <SVC>_API_KEY from process.env, exposes <svc>Fetch(path, init), <svc>Configured(), <svc>BaseUrl(). No types, no session check — that is the next layer's job. Importable by health probes that need to know whether a service is wired without claiming a session.
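In sketch form, with pr standing in for any service. The env names follow the convention above; the body is illustrative, not the shipped file.

```ts
// lib/pr.ts: the thinnest possible upstream client (sketch).
const BASE = process.env.PR_API_BASE_URL;
const KEY = process.env.PR_API_KEY;

// Health probes can ask "is this wired?" without claiming a session.
export function prConfigured(): boolean {
  return Boolean(BASE && KEY);
}

export function prBaseUrl(): string {
  if (!BASE) throw new Error("PR_API_BASE_URL is not set");
  return BASE;
}

// No domain types, no session check: that is the next layer's job.
export async function prFetch(path: string, init: RequestInit = {}): Promise<Response> {
  if (!prConfigured()) throw new Error("pr is not configured");
  const headers = new Headers(init.headers);
  headers.set("Authorization", `Bearer ${KEY}`);
  // No caching: every render is a fresh read of the upstream.
  return fetch(`${BASE}${path}`, { ...init, headers, cache: "no-store" });
}
```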

λ.2 · Session-gated client

server/operator/<svc>

import "server-only";

Every exported function calls await requireOperatorSession() before it touches the network. Returns typed data the page.tsx files can render directly. Consumed by route components and server actions only — never imported into anything that ships to the browser.
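The same service, one layer up, as a sketch. requireOperatorSession and the server-only import are the contract described above; the PrRun type, the /runs path, and the @/ import alias are assumptions.

```ts
// server/operator/pr.ts: session-gated, typed wrapper (sketch).
import "server-only";
import { prFetch } from "@/lib/pr";
import { requireOperatorSession } from "@/lib/operator-auth";

// Hypothetical shape; the real types live next to the real API.
export type PrRun = { id: string; status: string; startedAt: string };

export async function listPrRuns(): Promise<PrRun[]> {
  // The session is proven before the network is touched, on every call.
  await requireOperatorSession();
  const res = await prFetch("/runs");
  if (!res.ok) throw new Error(`pr-api responded ${res.status}`);
  return res.json();
}
```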

λ.3 · Render layer

(operator)/<route>/page.tsx

Server components

Each route is a server component that calls the session-gated client, awaits the fresh fetch, and renders the result. There is no useEffect refetch, no SWR, no client cache. If you want the latest number, you reload the page — and you get it, every time.
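Put together, a route reads like this sketch. The file path follows the layout above; the markup is illustrative.

```tsx
// app/(operator)/pr/runs/page.tsx: server component, fresh fetch per render (sketch).
import { listPrRuns } from "@/server/operator/pr";

export default async function PrRunsPage() {
  // Awaited at render time. Reload the page, get the latest numbers.
  const runs = await listPrRuns();
  return (
    <ul>
      {runs.map((run) => (
        <li key={run.id}>
          {run.id} · {run.status} · {run.startedAt}
        </li>
      ))}
    </ul>
  );
}
```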

How it fits the fleet

Every service publishes. Operator aggregates.

Operator is not eight admin panels stitched together — it is one UI in front of eight services that already know how to be administered. The aggregation is what unlocks the fleet behaviours, like Dhara's findings shaping discovery's intelligence view, or alerts replay walking the route table fed by every producer.

pr

Model pools, API keys, effective config, stage smoke tests. Run history at /runs, publish queue at /publish, audience profiles, fact bundles, artifacts, and the pipeline audit log — all served from pr-api on render.

cal

Calendar hub aggregates ICS and CalDAV sources, sync history, the main calendar view, per-client calendars, and utility operations. The cal-api is the truth; Operator is the UI that lets a human reach into it.

alerts

Hub overview with health and counters, channels CRUD with inline test, routes ordered by priority, producer API key issuance, paginated event history with filters, and per-event replay. Two bearer levels — producer key and admin token — kept distinct.

outreach + discovery

Outreach renders the pipeline funnel, campaigns, prospects, and stage transitions. Discovery shows targets with overview stats; per-target jobs, findings, and tech inventory; and the cross-target intelligence view fed by Dhara's flywheel.

dhara + mail + web

Dhara jobs, schedules, and shares aggregated on a single page with a streaming progress form. The mail surfaces link out to the platform admin. Web exposes admin sync for the PR ingest secret. Each one wired with its own API key, gated by the same session.

secrets rotator

/secrets reads the canonical SECRETS.md registry, dry-runs against every installed_in path, requires a typed confirmation, then atomically rewrites .env files and restarts consumers. The control plane that operates the control planes — typed, audited, and rolling its own JSONL log.

Surfaces & contracts

Six routes worth naming.

Operator's URL space tracks the modules it aggregates, but a small number of routes carry the load. Each one is a server component, no client cache, no stale state.

/platform

Module overview

Home after login. Lists every aggregated module, its configured-or-not status, and a jump card. Root / redirects here. The map of the fleet from one screen.

/health

Cross-service health

Calls every upstream's health endpoint in parallel, shows up-or-down, latency, and which env vars are present. The first place to look when something downstream is misbehaving.
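The probe loop, sketched. It assumes each lib/<svc> module exposes its fetch helper and that the upstreams answer a cheap health path; both are illustrative here.

```ts
// Sketch: probe every upstream in parallel, report up/down and latency.
type Probe = { name: string; check: () => Promise<Response> };

export async function probeAll(probes: Probe[]) {
  return Promise.all(
    probes.map(async ({ name, check }) => {
      const started = Date.now();
      try {
        const res = await check();
        return { name, up: res.ok, latencyMs: Date.now() - started };
      } catch {
        // A service that is down or unconfigured reports honestly, not as a crash.
        return { name, up: false, latencyMs: Date.now() - started };
      }
    }),
  );
}

// Usage, illustrative: probeAll([{ name: "pr", check: () => prFetch("/health") }, ...])
```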

/connections

Webhook + signing settings

Cross-service webhook URLs, HMAC secrets, and signing settings. /settings is an alias that redirects here. The wiring map between services lives at one URL.

/secrets

Secrets rotator

Rotation UI for self-generated secrets. Reads operator/config/secrets.yaml, dry-runs, requires the secret id typed verbatim, atomically rewrites every installed_in .env, restarts consumers, logs a SHA-256 of the byte source. Never the value.

/alerts/events/[id]

Event detail with replay

A single event with its full payload, every delivery attempt, and a Replay button that walks the route table again. The smallest surface that lets you debug a misrouted alert.

/discovery/intelligence

Cross-target patterns

Portfolio risk ranking and four pattern panels — vulns, tech, exploited, CVE. Reads the discovery knowledge graph that Dhara's flywheel keeps fed. The first surface where the fleet behaves like a fleet.

Senior engineering, visible

The proofs are in the substrate.

Five decisions visible in the file layout, the compose file, and the rotator's confirmation flow — not adjectives, but design choices made before the substrate was poured.

Two-layer client, enforced.

lib/<svc>.ts holds the fetch shape; server/operator/<svc>.ts holds the auth gate and the types. server-only is at the top of every gated module. The split lets a layout's health probe call <svc>Configured() without a session, while every actual data fetch passes through requireOperatorSession first.

Optional everything at build time.

If a module's API key is unset, the page renders a typed not-configured panel rather than crashing the build. Add a new integration by creating two files and one .env block — no rebuild, no migration, no rollout choreography.
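The guard is one branch at the top of the page. A sketch, with dhara standing in; NotConfiguredPanel and listDharaJobs are hypothetical names.

```tsx
// Sketch: a module page degrades to a typed panel instead of crashing the build.
import { dharaConfigured } from "@/lib/dhara";
import { listDharaJobs } from "@/server/operator/dhara";

function NotConfiguredPanel({ name }: { name: string }) {
  return <p>{name} is not configured on this deployment.</p>;
}

export default async function DharaPage() {
  // Unset key: honest panel, working build, nothing to migrate.
  if (!dharaConfigured()) return <NotConfiguredPanel name="dhara" />;
  const jobs = await listDharaJobs();
  return <pre>{JSON.stringify(jobs, null, 2)}</pre>;
}
```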

Root inside the container, on purpose.

user: "0:0" + security_opt: label:disable + group_add: 995 are not an oversight. The /secrets rotator writes to a host-mounted matrix of stack .env files owned by different host users and restarts containers via the docker socket. Bounded scope, deliberate trade — documented in CLAUDE.md so the next operator does not 'fix' it.

Dry-run is the only path to rotate.

operatorExecuteRotate refuses to run without the single-use confirmId issued by the most recent dry-run; tokens expire in five minutes; the modal disables the execute button until the operator types the secret id verbatim. No auto-rollback — partial failures are surfaced honestly, because lying about state is worse than the failure.
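The gate, reduced to its logic in a sketch; issueConfirmId, consumeConfirmId, and the in-memory store are illustrative names, not the shipped implementation.

```ts
// Sketch of the dry-run-then-execute gate.
import { randomUUID } from "node:crypto";

const CONFIRM_TTL_MS = 5 * 60 * 1000; // tokens expire in five minutes
const pending = new Map<string, { secretId: string; issuedAt: number }>();

// The dry-run is the only issuer; execute cannot mint its own pass.
export function issueConfirmId(secretId: string): string {
  const confirmId = randomUUID();
  pending.set(confirmId, { secretId, issuedAt: Date.now() });
  return confirmId;
}

export function consumeConfirmId(confirmId: string, secretId: string): void {
  const entry = pending.get(confirmId);
  pending.delete(confirmId); // single-use: spent whether or not the checks pass
  if (!entry) throw new Error("no dry-run on record; run one first");
  if (entry.secretId !== secretId) throw new Error("confirmId was issued for a different secret");
  if (Date.now() - entry.issuedAt > CONFIRM_TTL_MS) throw new Error("confirmation expired; dry-run again");
}
```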

HUMAN-AUTH ready, not migrated.

The current env-password + HMAC session cookie is a known checkpoint, not a permanent shape. Phase 5 of HUMAN-AUTH swaps lib/operator-auth.ts for the @e11/cf-access middleware in the same slot. The migration was scoped before any code shipped — that is what it means to know where you are going.

Who this is for

Operators who refuse to be lied to.

Operator earns its keep when the cost of an admin plane disagreeing with the upstream service starts to exceed the cost of running one that doesn't store anything.

Partners running Eleven11 stacks under their own brand who need an admin plane that is theirs end-to-end, on their iron, on their domain.
Operations teams who have been burnt by an admin tool whose cached numbers disagreed with the upstream service during an incident.
Engineering leaders who refuse to put their producer keys, customer rows, and webhook secrets through a third-party SaaS console, regardless of its SOC report.
Founders who want to add a ninth or tenth internal service and not pay an admin-platform tax to expose it — two files and an env block.
Anyone already running a multi-service backend who has accepted that the admin plane is part of the product, not an afterthought, and would like that part to behave like the rest.

FAQ

Final friction, reduced.

Is Operator self-serve?

No. It is partner-tier on purpose. Operator is the admin plane Eleven11 itself uses to run the fleet, and is offered to partners deploying the stack under their own brand — not as a generic admin product. The deliberate scope is what makes it small and honest.

Why no database?

Because the moment Operator stores its own copy of pr's runs, alerts' events, or discovery's findings, those copies start to drift. Every page is a server component that fetches on render against the upstream API. The reading is correct because there is nothing to go stale — and that is one less substrate to break.

How does adding a new internal service work?

lib/<svc>.ts (low-level fetch, env-gated), server/operator/<svc>.ts (session-gated typed wrappers, server-only at the top), one re-export, one .env block on the server, push to main. The route is a server component that imports from server/operator and renders. No build-time coupling between modules.

What is the auth path long term?

Today: env-backed login plus HMAC session cookie via OPERATOR_SESSION_SECRET. The host already routes through the eleven11-main Cloudflare tunnel as pure transport — Phase 5 of HUMAN-AUTH attaches CF Access policies and swaps lib/operator-auth.ts for the @e11/cf-access vendored middleware in the same slot. Same shape, different gate.
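For the shape of today's gate, a sketch of an HMAC session token in the style the answer describes; the payload fields and helper names are assumptions.

```ts
// Sketch: mint and verify an HMAC-signed session token.
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = process.env.OPERATOR_SESSION_SECRET ?? "";

function sign(body: string): string {
  return createHmac("sha256", SECRET).update(body).digest("base64url");
}

export function mintSession(ttlMs: number): string {
  const body = Buffer.from(JSON.stringify({ expiresAt: Date.now() + ttlMs })).toString("base64url");
  return `${body}.${sign(body)}`; // stored as the session cookie value
}

export function verifySession(token: string): boolean {
  const [body, mac] = token.split(".");
  if (!body || !mac) return false;
  const expected = Buffer.from(sign(body));
  const given = Buffer.from(mac);
  // Constant-time compare; a length mismatch is an immediate reject.
  if (expected.length !== given.length || !timingSafeEqual(expected, given)) return false;
  const { expiresAt } = JSON.parse(Buffer.from(body, "base64url").toString());
  return Date.now() < Number(expiresAt);
}
```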

Discuss Operator

One UI for the fleet you already run.

Operator is partner-tier — deployed alongside the rest of the Eleven11 stack on your infrastructure, under your brand. Talk to us about the fleet you want to administer from one screen.

Direct line

Consultation requests stay owned. We reply from e11 after reviewing fit and timing.