
Pick a deployment

Four scenarios, four starting points. Pick the row that matches what you’re standing up; each row routes you into the right guide.

| Scenario | Path | Setup time |
| --- | --- | --- |
| Local laptop dev — one developer, no shared lab, no enclosure | Run locally | ~10 minutes |
| Single shared VM (small team) | Netlab host → systemd (Operator runbook) → Headscale: quick | ~1 hour |
| Multi-tenant lab (large team / multiple sites) | Netlab host → systemd + reverse proxy → Headscale reference for ACLs | 2–3 hours |
| CI runner pool | Wire into CI → subnet routing for the runners (Headscale reference, or any equivalent VPN) | ~30 minutes once a host exists |

The shared-VM and multi-tenant rows assume the project’s recommended Headscale enclosure. Any equivalent network boundary works — managed Tailscale, WireGuard, internal-only VLAN with IP allowlists, mTLS at a reverse proxy. See Headscale: quick → Other approaches before committing.

When each shape is the right call

  •   Local laptop

    One developer, fast iteration, no shared state.

    Server runs in your terminal, in the foreground. Stop it when you stop working. No VPN, no lock recovery, no shared queue. Containerlab is greedy with CPU; offload to a VM when laptop heat or other people’s tests start to matter.

  •   Single shared VM

    Small team, one lab host, occasional contention.

    A single instance under systemd on a dedicated VM, with a network enclosure so the team’s laptops and CI runners can reach the lab subnet — Headscale is the path we ship docs for, but any VPN/IP-allowlist scheme works. This is the default deployment shape; it covers everything up to a dozen or so developers.

  •   Multi-tenant lab

    Multiple teams, sometimes several sites, ACLs needed.

    The same systemd-managed instance, plus a reverse proxy (TLS) and ACLs restricting subnet access by user/tag (Headscale ACLs in our reference deployment; any equivalent policy plane fits). The lab queue serializes contention; the ACLs restrict who can queue at all. Still one process per host — the one-server invariant doesn’t change.

  •   CI runner pool

    Automated tests against an existing host.

    Runners need subnet-route access to the lab host (tailnet membership in the Headscale path; the equivalent on whichever enclosure you picked) and the four REMOTE_LAB_* env vars. Tune the queue settings to match runner concurrency. The host itself is one of the three shapes above.
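Two of the shapes above run the server under systemd. A minimal unit sketch, assuming an install via uv/pipx as the Operator runbook describes; the unit name, user, and ExecStart path here are illustrative, not the runbook’s actual values:

```ini
# Hypothetical unit file -- service name, user, and binary path are
# placeholders; the Operator runbook has the real install details.
[Unit]
Description=Remote lab server (one instance per host)
After=network-online.target
Wants=network-online.target

[Service]
User=labops
ExecStart=/home/labops/.local/bin/remote-lab-server
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

`Restart=on-failure` keeps a crashed server from staying down, while a clean stop (or the stale-lock procedure) remains a deliberate operator action.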

What’s the same regardless

Every deployment shares the load-bearing parts:

  • One server per host. Filelock catches a second instance.
  • One lab per host. LabManager + GLOBAL_LOCK — Netlab can’t run two topologies anyway.
  • X-Session-ID is the only access boundary (see Security model). The VPN is mandatory; the service has no HTTP authentication.
  • Topology identity is the SHA-256 of the file content (see Lab lifecycle). Reuse just works across copy-pasted topologies.
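The content-hash invariant is easy to see in a few lines. A sketch assuming nothing beyond the standard library: two copies of the same topology text get the same identity, whatever their filenames, which is why reuse works across copy-pasted topologies.

```python
import hashlib


def topology_id(content: bytes) -> str:
    """Identity is the SHA-256 of file content; the filename plays no part."""
    return hashlib.sha256(content).hexdigest()


# The topology text itself is invented for the demo.
original = b"name: demo\nnodes: [r1, r2]\n"
pasted_copy = b"name: demo\nnodes: [r1, r2]\n"  # same bytes, different file

print(topology_id(original) == topology_id(pasted_copy))  # True
```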

What changes between shapes

| Concern | Local | Single VM | Multi-tenant | CI pool |
| --- | --- | --- | --- | --- |
| Process supervisor | foreground | systemd | systemd + reverse proxy | n/a (uses an existing host) |
| Network enclosure | none | VPN or VLAN (Headscale recommended) | VPN + ACLs + TLS (Headscale recommended) | enclosure-specific auth per runner |
| Lock recovery | Ctrl+C and restart | stale-lock procedure | same, plus monitoring | n/a |
| Vendor images | whatever you need locally | the team’s defaults | per-tenant defaults possible | inherits from host |
| Concurrency tuning | n/a | client REMOTE_LAB_SESSION_TIMEOUT | server _WAITING_SESSION_TIMEOUT matters | queue contention math |
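In the Headscale reference deployment, the multi-tenant ACL column corresponds to a policy file along these lines. A hedged sketch in Headscale’s HuJSON policy format; the group names, users, and subnets are invented for illustration, and the Headscale reference is authoritative for the real syntax:

```json
{
  // All names and subnets below are hypothetical.
  "groups": {
    "group:team-a": ["alice@example.com"],
    "group:team-b": ["bob@example.com"]
  },
  "acls": [
    // Each team may reach only its own lab subnet, so only its
    // members can even queue against that lab host.
    { "action": "accept", "src": ["group:team-a"], "dst": ["10.83.1.0/24:*"] },
    { "action": "accept", "src": ["group:team-b"], "dst": ["10.83.2.0/24:*"] }
  ]
}
```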

See also

  • Netlab host — install Netlab + Containerlab rootless. Prerequisite for every deployment except “Local laptop” running on a dev’s machine.
  • Headscale: quick — five-command Headscale + Headplane setup; also lists the alternatives (managed Tailscale, WireGuard, IP allowlists, mTLS).
  • Headscale reference — ACLs, OIDC, troubleshooting (Headscale-specific).
  • Operator runbook — install via uv/pipx, run under systemd, recover from stale locks.
  • Security model — what the X-Session-ID gate protects against and what it doesn’t.
  • Wire into CI — runner pipeline shapes and queue-tuning pointers.
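The CI row depends on the four REMOTE_LAB_* env vars reaching the runner. This page deliberately doesn’t name them (Wire into CI does), so here is a prefix-only preflight sketch a runner job could run before tests; the helper name and the demo variable are invented, and only the REMOTE_LAB_ prefix comes from this page:

```python
import os

PREFIX = "REMOTE_LAB_"


def lab_env_vars(environ=None) -> list[str]:
    """Names of the REMOTE_LAB_* variables visible to this runner."""
    if environ is None:
        environ = dict(os.environ)
    return sorted(k for k in environ if k.startswith(PREFIX))


# Demo against a fake environment; the variable name is a placeholder,
# not one of the real four from the Wire into CI guide.
demo = {"REMOTE_LAB_EXAMPLE": "x", "PATH": "/usr/bin"}
print(lab_env_vars(demo))  # ['REMOTE_LAB_EXAMPLE']
```

A job would fail fast if fewer variables than expected show up, rather than timing out mid-test against an unreachable lab host.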