e11

Cal · Self-hosted calendar sync hub

Your schedule, on a CalDAV server you own.

Cal is what we built when we wanted a calendar substrate that ingests Google, ICS, and CalDAV — normalizes them into typed utility buckets — and exposes outbound CalDAV that your phone, desktop, and automations already speak. Vendor at the ingress, standard protocol on the way out, your data in your Postgres.

Live at cal.eleven11.pro. Ten tables, four widgets, one Radicale on the edge. No integration vendor in the path.

Surface signal

Status

LIVE

Outbound

CalDAV

Hub

cal.eleven11.pro

Why this exists

Calendars are either rented from Google or wired by hand, every time.

Most teams pick one of two failures. Either every schedule lives inside Google or Outlook — and every programmatic touch is rate-limited, OAuth-fragile, and bound to a vendor's product roadmap. Or they wire a brittle integration per surface — one for the booking form, one for the room display, one for the team availability bot — and every one of those grows its own auth shape and its own outage.

Cal is the opposite move. Ingest the sources you already have — Google OAuth, public ICS, third-party CalDAV — normalize into typed utility buckets in a Postgres you own, then expose a Radicale CalDAV server on the way out. Phones, desktops, and automations subscribe to a standard protocol. The vendor stays at the ingress. Your schedule is yours.

Self-sustained by design

Owned, not rented.

Cal exists because the calendar is the part of your operation most quietly captured by the vendor. We treat that capture as a problem to design out, not a convenience to lean into.

01

No Google at the point of use.

Google appears once, at OAuth ingress for accounts you choose to import. Every downstream consumer — phone subscribe, free/busy lookup, programmatic create — talks to your Radicale, not Google's API. No quotas between you and your own schedule.

02

Standard outbound, no vendor SDK.

Outbound is plain CalDAV. iOS, macOS, Thunderbird, Outlook, every cron-driven script — they all speak it natively. The day you walk away from Cal, your subscribers keep working against any other CalDAV server you point them at.

03

Credentials encrypted at rest.

Source credentials live in Postgres as AES-256-GCM ciphertext, keyed by a scrypt-derived key with a versioned wire envelope. The encryption key sits in your server env, not in code, not in the repo, not in a third-party KMS you don't control.
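A minimal sketch of what a versioned AES-256-GCM envelope with a scrypt-derived key can look like in Node. The exact field order, salt/IV sizes, and function names here are assumptions, not Cal's actual wire format:

```typescript
import { randomBytes, scryptSync, createCipheriv, createDecipheriv } from "node:crypto";

// Hypothetical envelope layout: [version | salt | iv | auth tag | ciphertext].
const VERSION = 1;

function encryptCredentials(plaintext: string, passphrase: string): Buffer {
  const salt = randomBytes(16);
  const key = scryptSync(passphrase, salt, 32); // 256-bit key from the server-env secret
  const iv = randomBytes(12);                   // GCM-standard 96-bit nonce
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  return Buffer.concat([Buffer.from([VERSION]), salt, iv, tag, ct]);
}

function decryptCredentials(envelope: Buffer, passphrase: string): string {
  // The version byte lets a future key-derivation change coexist with old rows.
  if (envelope[0] !== VERSION) throw new Error(`unknown envelope version ${envelope[0]}`);
  const salt = envelope.subarray(1, 17);
  const iv = envelope.subarray(17, 29);
  const tag = envelope.subarray(29, 45);
  const ct = envelope.subarray(45);
  const key = scryptSync(passphrase, salt, 32);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM authenticates: a wrong key or tampered row throws
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

The fresh salt per row means two sources with identical credentials still produce unrelated ciphertext.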

04

Fail loud, fail visible.

Every sync writes a row to sync_runs. Every failure mode — 401, unreachable, terminal — fires a distinct alert event with its own idempotency key, so a 24h dedup window never collapses two different problems into one notification.
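One way to realize "distinct key per failure mode" is to fold the mode, the source, and a 24h bucket into the key, so repeats of the same problem dedup while different problems never collide. A sketch under that assumption; the bucketed window stands in for whatever rolling window alerts actually uses:

```typescript
// Hypothetical key shape: one idempotency key per (failure mode, source, 24h bucket).
type FailureMode =
  | "cal.sync.failed"
  | "cal.source.401"
  | "cal.source.unreachable"
  | "cal.egress.failed";

const DEDUP_WINDOW_MS = 24 * 60 * 60 * 1000;

function idempotencyKey(mode: FailureMode, sourceId: string, at: Date): string {
  const bucket = Math.floor(at.getTime() / DEDUP_WINDOW_MS);
  return `${mode}:${sourceId}:${bucket}`; // same problem within the window → same key → one alert
}
```

Because the mode is part of the key, a 401 and an unreachable on the same source raise two notifications even inside one window.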

05

Single-operator, single-workspace by default.

Schema supports multi-tenant; runtime forces one workspace at boot. The honest answer to scale is per-tenant boxes, not multi-tenant rows. Until you decide otherwise, the surface stays small and the threat model stays legible.

The primitive

Three things you can name. Typed.

Cal is built on a small, opinionated trio — a source you ingest, a utility you bucket into, a packet that becomes the canonical event. Ten tables sit behind them, but the surface is three nouns. Everything else is a server action that mutates one of those three.

P-01 · Source

calendar_sources

kind: caldav | ics | google_oauth

One row per upstream you ingest. Google over OAuth, public ICS feeds, third-party CalDAV with stored credentials. Each source belongs to a utility, runs on a periodic tick, and writes a sync_runs row every cycle.

P-02 · Utility

utility_calendars

ota · content · team · personal · main

Five typed buckets seeded at boot. Each utility is an opinionated grouping with its own Radicale collection slug. The main utility is a virtual aggregator — it unions the utilities you've selected into a single subscribable view.
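The "virtual aggregator" behavior can be sketched as a union over the selected utilities' events, deduplicated by packet id. The types and the dedup-by-id rule are assumptions for illustration:

```typescript
// Hypothetical sketch: "main" stores no events of its own; it unions the
// utilities you've selected into one subscribable view.
interface Ev { id: string; utility: string }

function mainView(selected: string[], eventsByUtility: Map<string, Ev[]>): Ev[] {
  const seen = new Set<string>();
  const out: Ev[] = [];
  for (const u of selected) {
    for (const e of eventsByUtility.get(u) ?? []) {
      if (!seen.has(e.id)) { // same packet reachable via two utilities appears once
        seen.add(e.id);
        out.push(e);
      }
    }
  }
  return out;
}
```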

P-03 · Packet

event_packet

POST /v1/events/ingest

The canonical per-event record everything else egresses from. A booking-form submit, a Google ingress, an architect-driven create — they all land here as a typed packet, then the dispatcher routes per active subscription.

How it fits the fleet

The schedule layer adjacent tools read from.

Cal is rarely the surface customers think they're buying — but it's the schedule the rest of the fleet reaches into. Architect canvas tiles, outreach booking flows, alerts on missed syncs, kosh tables with booking rows — they all read and write through the same typed shape.

architect

Architect's canvas consumes cal.upcoming, cal.today, cal.free-busy, and cal.booking-form widgets via the v1 widget contract. Schedule shapes live next to your prose, not in another tab.

operator

Operator drives source CRUD, utility selection, sync runs, and main-view membership server-to-server over the e11-edge Docker network with x-api-key. cal-api is never browser-facing.

alerts

Sync failures fan out to alerts as cal.sync.failed, cal.source.401, cal.source.unreachable, cal.source.connected, and cal.egress.failed. Each carries a distinct idempotency key so the dedup window doesn't swallow distinct problems.

outreach

Outreach reads free-busy off cal when a prospect picks a time. The privacy primitive emits busy intervals only — no summary, no location, no attendee leakage.

kosh

Tenant tables that hold contacts and bookings sit alongside cal in the same per-tenant box. A booking row in Kosh and an event_packet in cal are different shapes of the same fact.

phones, desktops, scripts

caldav.eleven11.pro is fronted by Cloudflare Access and reverse-proxied to cal-radicale. iOS, macOS, Thunderbird, and any cron-driven CalDAV client subscribe directly. No integration vendor in the path.

Surfaces & contracts

Six things you actually call.

Cal is server-to-server first. Operator drives CRUD with x-api-key over the Docker network; architect calls the widget contract; phones subscribe to Radicale. No browser CORS surface, no public auth seam.

GET /v1/widgets

Widget discovery

Architect calls this first. Returns the descriptor list — cal.upcoming, cal.today, cal.free-busy, cal.booking-form — with stale-after hints and config schemas. Adding a widget is appending to a static descriptor list plus one route.
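A sketch of what "appending to a static descriptor list" can mean in practice. The field names (id, staleAfterSeconds, configSchema) and the per-widget values are assumptions, not the real v1 contract:

```typescript
// Hypothetical static descriptor list behind GET /v1/widgets.
interface WidgetDescriptor {
  id: string;
  staleAfterSeconds: number;               // hint for the caller's cache
  configSchema: Record<string, unknown>;   // stand-in for a real JSON schema
}

const WIDGETS: WidgetDescriptor[] = [
  { id: "cal.upcoming", staleAfterSeconds: 60, configSchema: { limit: "number" } },
  { id: "cal.today", staleAfterSeconds: 60, configSchema: {} },
  { id: "cal.free-busy", staleAfterSeconds: 30, configSchema: { from: "string", to: "string" } },
  { id: "cal.booking-form", staleAfterSeconds: 3600, configSchema: { eventType: "string" } },
];

// Adding a widget = append one descriptor here, plus one route that renders it.
function listWidgets(): WidgetDescriptor[] {
  return WIDGETS;
}
```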

GET /v1/widgets/cal.free-busy

Privacy primitive

Emits {from, to, status: busy} intervals only. No summary, no location, no attendee names. This is the surface outreach and architect call when they need to know whether a slot is taken without knowing why.
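The privacy guarantee falls out of the projection: events go in, only interval bounds come out. A minimal sketch, assuming overlapping intervals are merged so the caller cannot even count the underlying events:

```typescript
// Hypothetical free-busy projection: details never cross this boundary.
interface EventRow { start: Date; end: Date; summary?: string; location?: string }
interface BusyInterval { from: string; to: string; status: "busy" }

function toFreeBusy(events: EventRow[]): BusyInterval[] {
  const sorted = [...events].sort((a, b) => a.start.getTime() - b.start.getTime());
  const merged: { from: Date; to: Date }[] = [];
  for (const e of sorted) {
    const last = merged[merged.length - 1];
    if (last && e.start <= last.to) {
      if (e.end > last.to) last.to = e.end; // extend the overlapping interval
    } else {
      merged.push({ from: e.start, to: e.end });
    }
  }
  // Only the bounds survive; summary, location, and attendees are dropped here.
  return merged.map((m) => ({ from: m.from.toISOString(), to: m.to.toISOString(), status: "busy" }));
}
```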

POST /v1/events/ingest

Booking form submit

The other half of the widget contract. Validates event_type exists, validates metadata against the type's schema, sanitizes attendee emails (refuses control chars, comma, semicolon), inserts an event_packet, fire-and-forget queues egress.
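The attendee check is reject-not-repair: commas, semicolons, and control characters are injection vectors in ICS and mail headers, so a suspicious address fails the whole ingest. A sketch under that assumption; the exact regexes are illustrative:

```typescript
// Hypothetical attendee-email gate: refuse control chars, comma, semicolon.
function sanitizeAttendeeEmail(raw: string): string {
  const email = raw.trim();
  // Control characters and list separators can smuggle extra ICS/mail headers.
  if (/[\u0000-\u001f\u007f,;]/.test(email)) {
    throw new Error("attendee email contains forbidden characters");
  }
  // Loose plausibility check only; real validation happens at delivery time.
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    throw new Error("attendee email is not a plausible address");
  }
  return email;
}
```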

GET /oauth/google/start

OAuth ingress

Single-use 10-minute state row, exchanged in /oauth/google/callback for a refresh_token. Stuffed into the AES-GCM credentials envelope, persisted as a calendar_source of kind google_oauth, fires cal.source.connected on success.
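The single-use, ten-minute state row reduces to two fail-closed checks before the code exchange. A sketch with assumed field names; the real row lives in Postgres, not memory:

```typescript
// Hypothetical state-row consumption: valid once, and only within ten minutes.
interface OAuthState { token: string; createdAt: Date; usedAt: Date | null }

const STATE_TTL_MS = 10 * 60 * 1000;

function consumeState(row: OAuthState | undefined, now: Date): boolean {
  if (!row) return false;                                                   // unknown state token
  if (row.usedAt) return false;                                             // already exchanged once
  if (now.getTime() - row.createdAt.getTime() > STATE_TTL_MS) return false; // expired
  row.usedAt = now; // mark consumed BEFORE the token exchange, so a replay races and loses
  return true;
}
```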

GET /v1/sync-runs

Append-only audit

Every inbound and outbound run writes a row — kind, status, error, eventsUpserted, timing. The /calendar/sync surface in operator reads from here. There is no soft-delete, no in-place mutation. The history is the history.

GET /v1/scheduler-state

Heartbeat

Singleton row the worker upserts on every setInterval fire and once at boot. Returns enabled, intervalMinutes, lastTickAt, nextTickAt. If the heartbeat goes stale, you know the worker is wedged before you discover it through a missed sync.
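The staleness test a monitor can run against this endpoint is one comparison: age of lastTickAt versus the interval plus slack. A sketch, assuming a 2× slack factor (the real threshold is a tuning choice):

```typescript
// Hypothetical staleness check over the /v1/scheduler-state payload.
interface SchedulerState { enabled: boolean; intervalMinutes: number; lastTickAt: string }

function heartbeatIsStale(state: SchedulerState, now: Date, slackFactor = 2): boolean {
  if (!state.enabled) return false; // a deliberately disabled scheduler is not "wedged"
  const ageMs = now.getTime() - new Date(state.lastTickAt).getTime();
  return ageMs > state.intervalMinutes * 60_000 * slackFactor;
}
```

A wedged worker trips this before the first missed sync is felt downstream.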

Senior engineering, visible

The proofs are in the substrate.

Five decisions visible in the network topology, the migration discipline, the egress retry shape, and the failure-mode catalog — not adjectives, design choices.

Two networks, on purpose.

cal-api and cal-radicale sit on e11-edge so operator and Caddy can reach them by name. cal-worker, cal-postgres, and cal-redis stay on the stack-local default network. Nothing outside the cal stack should reach the queue or the database.

Outbound is full-replace, not diff.

pushUtilityToRadicale deletes every object in the target collection then writes every event fresh. There is no clever diff layer to drift out of sync. The trade-off is named in the docs, not hidden under abstraction.
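The whole strategy fits in a few lines, which is the point: nothing to diff means no diff state to corrupt. A sketch with a stand-in CalDAV client; `deleteAll` and `put` are hypothetical method names, not the real client's API:

```typescript
// Hypothetical full-replace push: wipe the collection, then write every event fresh.
interface DavClient {
  deleteAll(collectionUrl: string): Promise<void>;
  put(objectUrl: string, icsBody: string): Promise<void>;
}

async function pushUtilityToRadicale(
  collectionUrl: string,
  events: { uid: string; ics: string }[],
  dav: DavClient,
): Promise<number> {
  await dav.deleteAll(collectionUrl); // no diffing against the server's current state
  for (const e of events) {
    await dav.put(`${collectionUrl}/${e.uid}.ics`, e.ics);
  }
  return events.length; // eventsUpserted for the sync_runs row
}
```

The named trade-off: every push rewrites the collection, so push cost scales with event count rather than change count.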

Egress retries with terminal alerts.

BullMQ runs five attempts with exponential backoff (3s → 6 → 12 → 24 → 48). On terminal failure the dispatcher writes egress_attempts.status=error and emits cal.egress.failed. Manual retry is one POST against /v1/admin/egress/retry.
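The listed schedule is a 3-second base doubling per retry. A sketch of the delay math only; how BullMQ counts attempts versus retries, and where "terminal" lands, is an assumption here:

```typescript
// Hypothetical retry-delay schedule matching 3s → 6 → 12 → 24 → 48.
const MAX_RETRIES = 5;
const BASE_DELAY_MS = 3_000;

function egressBackoffMs(retryNumber: number): number | null {
  if (retryNumber > MAX_RETRIES) {
    // Terminal: the dispatcher writes egress_attempts.status=error
    // and emits cal.egress.failed instead of scheduling another try.
    return null;
  }
  return BASE_DELAY_MS * 2 ** (retryNumber - 1);
}
```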

Two inbound paths, named.

executeInboundSyncForSource (inline, blocks the response on /sync-now) and syncSourceById (queued, runs in the worker) are different code paths. The doc says they should converge eventually. We tell you that, instead of pretending the seam isn't there.

Migrations once, from one container.

MIGRATE_ON_START is true on cal-api, false on cal-worker. The same image powers both — compose overrides the worker's entrypoint. One image, one source of truth, no drift between API and worker code paths.

Who this is for

Teams whose schedule is load-bearing.

Cal earns its keep when the cost of someone else owning your calendar starts to exceed the cost of running a sync hub.

Teams that want a phone-subscribable schedule without sending every booking through Google.
Operators running booking flows where free-busy needs to be programmatic but attendee detail must stay private.
Engineering teams who'd rather subscribe to a CalDAV URL than wire a SaaS integration per surface.
Anyone whose current Google Calendar quota is the load-bearing piece of an automation pipeline.
Per-tenant SaaS shops where every customer should get their own utility calendars and their own Radicale principal.

FAQ

Final friction, reduced.

Does this replace Google Calendar?

No. Cal ingests Google through OAuth and lets the rest of your fleet read the result without touching Google again. If you and your team want to keep editing in the Google UI, you can. If you want to migrate off, the outbound CalDAV path means your phones and desktops keep working unchanged the day you do.

What does my phone see?

A standard CalDAV server at caldav.eleven11.pro, fronted by Cloudflare Access. iOS, macOS, Thunderbird, and Outlook all speak it natively. /v1/client-setup renders the exact URL plus the Radicale username for the device dialog.

How does the booking form work end-to-end?

Architect renders the cal.booking-form widget from a descriptor. The user submits, the widget posts to /v1/events/ingest, cal validates the metadata against the event_type schema, inserts an event_packet, queues an egress job, and the dispatcher writes to whichever subscriptions match — Radicale, Gmail, webhook, alerts.

What happens if cal-worker is down?

cal-api keeps serving reads — sources, events, widgets. Inline /sources/:id/sync-now still works because it bypasses the queue. Periodic syncs and egress fan-out pause until the worker is back. /v1/scheduler-state shows a stale lastTickAt; alerts flag the gap before a customer notices.

Discuss Cal

Bring your calendar home.

Cal is partner-deployed today — bundled with Architect's per-tenant box pattern or running on your own hardware. Talk to us about source migration, Radicale topology, or wiring the widget contract into an existing operator UI.

Direct line

Consultation requests stay owned. We reply from e11 after reviewing fit and timing.