
Cookbook

Every recipe below ships in the repo, expands inline, and is tested by CI. Click “View source” to expand the full file and copy-paste it; the GitHub permalink at the bottom of each block lets you bookmark or link to the exact source.

tests/test_examples.py parametrises over examples/ — when an API change breaks an example, the same PR sees red CI. Treat the recipes as part of the public surface.

Pytest recipes

Quickstart demo

The smallest working remote_lab_fixture example — three-line test against a two-router FRR topology.

View examples/quickstart/conftest.py
from neops_remote_lab.testing.fixture import remote_lab_fixture

demo_lab = remote_lab_fixture("tests/topologies/demo.yml")

View on GitHub

View examples/quickstart/test_demo.py
def test_demo_lab_has_two_devices(demo_lab):
    names = sorted(d.name for d in demo_lab)
    assert names == ["r1", "r2"]

View on GitHub

View examples/quickstart/demo.yml
provider: clab
defaults:
  device: frr
nodes:
  r1:
    module: [ospf]
  r2:
    module: [ospf]
links:
  - r1-r2

View on GitHub

Shared-topology fixture

Multiple tests against one running lab via reuse_lab=True — the contention-collapse pattern.

View examples/pytest_fixtures/conftest.py
from neops_remote_lab.testing.fixture import remote_lab_fixture

frr_lab = remote_lab_fixture(
    "tests/topologies/frr.yml",
    reuse_lab=True,
)

View on GitHub

View examples/pytest_fixtures/test_frr_ospf.py
def test_two_routers_present(frr_lab):
    names = sorted(d.name for d in frr_lab)
    assert names == ["r1", "r2"]


def test_devices_reported_by_netlab(frr_lab):
    # `d.raw` is the full `netlab inspect` dict for each node.
    assert all(d.raw for d in frr_lab)


def test_device_names_are_stable(frr_lab):
    # reuse_lab=True means this test shares the lab from the previous two.
    assert {d.name for d in frr_lab} == {"r1", "r2"}

View on GitHub
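Conceptually, reuse_lab=True behaves like a ref-count on one running lab: the first acquirer boots it, later acquirers join it, and the last release tears it down. The sketch below models that semantics only; the class and method names are hypothetical, not the server's actual implementation.

```python
class LabPool:
    """Illustrative ref-count model of lab reuse (hypothetical names)."""

    def __init__(self) -> None:
        self.refs = 0
        self.running = False

    def acquire(self, reuse: bool) -> None:
        if not self.running:
            self.running = True          # first caller boots the lab
        elif not reuse:
            raise RuntimeError("lab busy; pass reuse=True to share it")
        self.refs += 1                   # every caller holds one reference

    def release(self) -> None:
        self.refs -= 1
        if self.refs == 0:
            self.running = False         # last release tears the lab down


pool = LabPool()
pool.acquire(reuse=True)   # boots the lab
pool.acquire(reuse=True)   # joins the already-running lab
pool.release()             # lab stays up: one reference left
assert pool.running
pool.release()             # last reference dropped: teardown
assert not pool.running
```

This is why the three tests above can run back-to-back against one lab: each test's acquire joins the running topology instead of booting a new one.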

Python (no pytest) recipes

Smoke test

End-to-end RemoteLabClient lifecycle: acquire, list devices, release, close.

View examples/scripts/smoke.py
"""Smoke test for `RemoteLabClient` — acquire a lab, list devices, release.

Usage:
    export REMOTE_LAB_URL=http://lab.example.com:8000
    python examples/scripts/smoke.py path/to/topology.yml
"""

import os
import pathlib
import sys

from neops_remote_lab.client import RemoteLabClient


def main(topology_path: str) -> None:
    client = RemoteLabClient(
        base_url=os.environ["REMOTE_LAB_URL"],
        session_timeout=120,  # fail fast if queue is deep
    )
    try:
        devices = client.acquire(
            topology=pathlib.Path(topology_path),
            reuse=False,
        )
        print(f"Acquired lab with {len(devices)} devices:")
        for d in devices:
            print(f"  {d.name}")
        client.release()
    finally:
        client.close()


if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("Usage: smoke.py <topology.yml>")
    main(sys.argv[1])

View on GitHub

Context-manager wrapper

A with-statement wrapper that guarantees close() on exception. Copy-paste-ready for any non-pytest script.

View examples/scripts/contextmanager_wrapper.py
"""A context-manager wrapper around RemoteLabClient.

`RemoteLabClient` does not implement `__enter__` / `__exit__`; this wrapper
ensures `close()` runs on the way out, even when the lab body raises.
"""

import contextlib
from collections.abc import Iterator

from neops_remote_lab.client import RemoteLabClient


@contextlib.contextmanager
def remote_lab_client(**kwargs) -> Iterator[RemoteLabClient]:
    client = RemoteLabClient(**kwargs)
    try:
        yield client
    finally:
        client.close()

View on GitHub
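The close-on-exception guarantee can be exercised without a live server. The sketch below substitutes a stub for RemoteLabClient; only the contextlib mechanics mirror the wrapper above, and the stub itself is hypothetical.

```python
import contextlib
from collections.abc import Iterator


class StubClient:
    """Stand-in for RemoteLabClient so the wrapper can be exercised offline."""

    def __init__(self, **kwargs) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True


@contextlib.contextmanager
def stub_lab_client(**kwargs) -> Iterator[StubClient]:
    client = StubClient(**kwargs)
    try:
        yield client
    finally:
        client.close()       # runs even when the with-body raises


leaked = None
try:
    with stub_lab_client() as client:
        leaked = client
        raise RuntimeError("simulated mid-test failure")
except RuntimeError:
    pass

assert leaked.closed         # close() ran despite the exception
```

The same `try/finally` around `yield` is what makes the real wrapper safe to use in scripts that may crash mid-run.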

cURL / shell recipes

End-to-end session

The six-call lifecycle in one bash script — create, wait, acquire, inspect, release, delete.

View examples/curl/end_to_end_session.sh
#!/usr/bin/env bash
# End-to-end Remote Lab Manager session via the REST API.
#
# Walks: create a session -> wait for ACTIVE -> acquire the lab (topology
# upload) -> release -> delete the session. Use as a copy-pasteable reference;
# for production the Python client is preferred (examples/scripts/smoke.py).
#
# Usage:
#   BASE_URL=http://lab.example.com:8000 \
#     ./examples/curl/end_to_end_session.sh tests/topologies/simple_frr.yml

set -euo pipefail

: "${BASE_URL:?BASE_URL must be set, e.g. BASE_URL=http://lab.example.com:8000}"
TOPOLOGY="${1:?Usage: $0 <topology.yml>}"

# 1. Create a session (blocks only if the queue is non-empty on the server)
SESSION_ID=$(curl -s -X POST "$BASE_URL/session" | jq -r .session_id)

# 2. Wait for ACTIVE
while true; do
  STATUS=$(curl -s "$BASE_URL/session/$SESSION_ID" | jq -r .status)
  [[ "$STATUS" == "active" ]] && break
  sleep 2
done

# 3. Acquire the lab
curl -s -X POST "$BASE_URL/lab" \
  -H "X-Session-ID: $SESSION_ID" \
  -F "topology=@${TOPOLOGY}" \
  -F "reuse=true" | jq .

# 4. Run your automation against the devices listed in the response...

# 5. Release the ref-count, then end the session
curl -s -X POST "$BASE_URL/lab/release" -H "X-Session-ID: $SESSION_ID"
curl -s -X DELETE "$BASE_URL/session/$SESSION_ID"

View on GitHub

Poll-until-active

A timeout-bounded poll loop that survives queue contention.

View examples/curl/poll_until_active.sh
#!/usr/bin/env bash
# Create a session and poll until it reaches ACTIVE.
#
# The Python client and pytest fixture do this automatically with exponential
# backoff; this script is the equivalent for shell-based callers.
#
# Usage:
#   LAB_HOST=lab.example.com:8000 ./examples/curl/poll_until_active.sh

set -euo pipefail

: "${LAB_HOST:?LAB_HOST must be set, e.g. LAB_HOST=lab.example.com:8000}"

SESSION=$(curl -s -X POST "http://$LAB_HOST/session" | jq -r .session_id)

TIMEOUT="${TIMEOUT:-300}"   # seconds to wait before giving up
deadline=$((SECONDS + TIMEOUT))

while true; do
  STATUS=$(curl -s "http://$LAB_HOST/session/$SESSION" | jq -r .status)
  [[ "$STATUS" == "active" ]] && break
  (( SECONDS >= deadline )) && { echo "Timed out after ${TIMEOUT}s waiting for ACTIVE" >&2; exit 1; }
  sleep 5
done

echo "$SESSION"

View on GitHub
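The exponential backoff the Python client applies can be sketched as a generic poll loop: double the delay each round up to a cap, and give up at a deadline. This is an illustrative pattern, not the client's actual code; all names here are made up.

```python
import time


def poll_until_active(get_status, timeout: float = 300.0,
                      base: float = 1.0, cap: float = 30.0) -> bool:
    """Poll get_status() until it returns 'active', doubling the sleep
    each round (capped at `cap`), and give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    delay = base
    while time.monotonic() < deadline:
        if get_status() == "active":
            return True
        time.sleep(min(delay, cap))
        delay *= 2               # back off: 1s, 2s, 4s, ... up to cap
    return False


# Simulated server that reports "queued" twice, then "active".
statuses = iter(["queued", "queued", "active"])
assert poll_until_active(lambda: next(statuses), timeout=5.0, base=0.01)
```

Backoff matters under queue contention: a fixed short sleep hammers the manager with status requests exactly when it is busiest.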

Force-cleanup

Operator script: take the lab down via DELETE /lab?force=true using any ACTIVE session.

View examples/scripts/force_cleanup.sh
#!/usr/bin/env bash
# Force-destroy a stuck lab via the REST API.
#
# Use when a lab is wedged: netlab up failed partway through, or a client
# crashed without releasing. Requires LAB_HOST to point at the Remote Lab
# Manager (e.g. lab.example.com:8000).
#
# Usage:
#   LAB_HOST=lab.example.com:8000 ./examples/scripts/force_cleanup.sh

set -euo pipefail

: "${LAB_HOST:?LAB_HOST must be set, e.g. LAB_HOST=lab.example.com:8000}"

SESSION_ID=$(curl -s -X POST "http://$LAB_HOST/session" | jq -r .session_id)

# Wait for ACTIVE
while [[ "$(curl -s "http://$LAB_HOST/session/$SESSION_ID" | jq -r .status)" != "active" ]]; do
  sleep 2
done

# Force destroy the lab
curl -s -X DELETE "http://$LAB_HOST/lab?force=true" \
  -H "X-Session-ID: $SESSION_ID"

# End the cleanup session
curl -s -X DELETE "http://$LAB_HOST/session/$SESSION_ID"

echo "Lab force-destroyed; cleanup session ended."

View on GitHub

Topology recipes

Minimal FRR

Two FRR routers, one link — fastest possible boot, the default for CI.

View examples/topologies/minimal_frr.yml
provider: clab
defaults:
  device: frr

nodes:
  r1:
  r2:

links:
  - r1-r2

View on GitHub

Minimal SR Linux

Two-spine, two-leaf SR Linux fabric — the smallest Nokia-NOS example.

View examples/topologies/minimal_srlinux.yml
provider: clab
defaults:
  device: srlinux

nodes:
  spine1:
  spine2:
  leaf1:
  leaf2:

links:
  - leaf1-spine1
  - leaf1-spine2
  - leaf2-spine1
  - leaf2-spine2

View on GitHub

Multi-vendor (FRR + SR Linux)

Per-node device selection (a device: key under each node): an SR Linux spine with two FRR leaves. Useful when Worker SDK function blocks target both vendors.

View examples/topologies/multi_vendor_frr_srlinux.yml
provider: clab

nodes:
  spine1:
    device: srlinux
  leaf1:
    device: frr
  leaf2:
    device: frr

links:
  - leaf1-spine1
  - leaf2-spine1

View on GitHub

Deployment artifacts

systemd unit

Drop-in neops-remote-lab.service for running the server under systemd.

View examples/systemd/neops-remote-lab.service
[Unit]
Description=neops-remote-lab Manager
After=network-online.target docker.service
Wants=network-online.target

[Service]
Type=simple
User=<SERVICE_USER>
Group=<SERVICE_USER>
# <INSTALL_PATH> is the pipx venv or virtualenv where neops-remote-lab was installed.
# With the default pipx layout, that is typically /home/<SERVICE_USER>/.local/pipx/venvs/neops-remote-lab.
ExecStart=<INSTALL_PATH>/bin/neops-remote-lab --host 0.0.0.0 --port 8000 --log-level INFO
Restart=on-failure
RestartSec=5
# Logs land in the journal by default (stdout/stderr). Override with --log-config to redirect.
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target

View on GitHub

Worker SDK function-block test

For Neops devs using the Worker SDK, the function-block-test pattern lives in the SDK’s docs, not in this cookbook. The setup on this side is two steps (install the package and set REMOTE_LAB_URL); the pattern itself is documented in the SDK.

Wanted

Recipes that don’t exist yet but would help. Open a PR if you have a good one:

  • Complete GitHub Actions workflow file as a runnable artifact. The Wire into CI page shows the env-var block; a full workflow with checkout, setup, retry policy, and timeout coordination would be more directly useful.
  • Advanced shared-topology pytest pattern demonstrating fixture-rank ordering across multiple test modules. The current example covers one module; a multi-module example would clarify how the ordering plugin reorders across files.

How CI keeps these honest

tests/test_examples.py parametrises over examples/topologies/*.{yml,yaml} and over the runnable scripts. A recipe that breaks the API gets a red CI on the same PR. The examples/README.md file documents the convention. Treat the recipes as part of the public surface — change them when the API changes, not after.
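A parametrised check of that shape might look like the sketch below. It is illustrative, not the repo's actual tests/test_examples.py, and it only verifies file suffixes; the real suite presumably does more.

```python
import pathlib

import pytest

EXAMPLES_DIR = pathlib.Path("examples/topologies")


def topology_files() -> list[pathlib.Path]:
    # Collect every *.yml / *.yaml recipe; empty (no tests generated)
    # when run outside the repo checkout.
    return sorted(EXAMPLES_DIR.glob("*.yml")) + sorted(EXAMPLES_DIR.glob("*.yaml"))


@pytest.mark.parametrize("topo", topology_files(), ids=lambda p: p.name)
def test_topology_has_yaml_suffix(topo: pathlib.Path) -> None:
    assert topo.suffix in {".yml", ".yaml"}
```

Because the file list is computed at collection time, adding a recipe to examples/topologies/ automatically adds a test case on the same PR.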

See also

  • Pytest fixtures — the API the pytest recipes consume.
  • Python client — the API the Python (no-pytest) recipes consume.
  • Drive from cURL — the cURL recipes are alternate entry points to the same six-call lifecycle.
  • Topology format — the YAML shape the topology recipes follow.
  • Vendor setup — the per-vendor install walkthroughs the topology recipes assume.