Pytest fixtures
remote_lab_fixture is the stable public API. Three lines of test code; the queue, the lifecycle, and teardown disappear into the fixture.
Using this from the Worker SDK?
The Worker SDK imports remote_lab_fixture directly. Read With Worker SDK and the SDK’s Remote lab testing guide for the function-block-test patterns.
Happy path: three tests, one running lab
The shape you’ll write 90% of the time — `reuse_lab=True` so a fleet of small assertions against the same topology pays the `netlab up` cost once:
```python
from neops_remote_lab.testing.fixture import remote_lab_fixture

# One topology, shared across every test that requests `simple_lab`.
simple_lab = remote_lab_fixture(
    "tests/topologies/simple_frr.yml",
    reuse_lab=True,  # (1)!
)
```
1. With `reuse_lab=True`, the second test that requests `simple_lab` finds the lab already running and just bumps the reference count. Without it, the first test’s teardown tears the lab down — the second pays the boot cost again.
```python
def test_two_routers_present(simple_lab):
    assert len(simple_lab) == 2


def test_devices_have_names(simple_lab):
    assert {d.name for d in simple_lab} == {"r1", "r2"}


def test_device_metadata_is_a_dict(simple_lab):
    for d in simple_lab:
        assert isinstance(d.raw, dict)
```
What you’ll see
```text
tests/test_frr_routing.py::test_two_routers_present
[INFO] Created session 4b8c... at queue position 0
[INFO] Session 4b8c... is active after 0.3s
[INFO] Lab acquired successfully (reused=False)
PASSED

tests/test_frr_routing.py::test_devices_have_names
[INFO] Lab acquired successfully (reused=True)
PASSED

tests/test_frr_routing.py::test_device_metadata_is_a_dict
[INFO] Lab acquired successfully (reused=True)
PASSED

[INFO] Releasing remote lab for simple_frr.yml
```
Three tests; one boot; two reuses. The final release drops the refcount to zero — the lab idles until the pytest session ends, then `atexit` cleans it up.
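The reuse accounting can be sketched as a toy reference counter, a stdlib-only simulation of the behavior described above (not the server’s actual code):

```python
class LabRefCount:
    """Toy model of the server-side reuse bookkeeping."""

    def __init__(self):
        self.running = False
        self.refs = 0
        self.boots = 0

    def acquire(self, reuse: bool) -> bool:
        """Return True when the caller reused an already-running lab."""
        if self.running and reuse:
            self.refs += 1
            return True
        # First acquire (or reuse disabled): pay the `netlab up` cost.
        self.boots += 1
        self.running = True
        self.refs = 1
        return False

    def release(self) -> None:
        self.refs -= 1
        # refs == 0: the lab idles; final cleanup happens at session end.


lab = LabRefCount()
reused = [lab.acquire(reuse=True) for _ in range(3)]  # three tests
for _ in range(3):
    lab.release()

print(reused)    # [False, True, True]: one boot, two reuses
print(lab.refs)  # 0
```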
How the plugin loads
Installing neops-remote-lab registers a pytest plugin via the package’s entry points. You do not add anything to conftest.py to enable it; you only need to import remote_lab_fixture at module scope where you want a fixture.
remote_lab_fixture (factory)
from neops_remote_lab.testing.fixture import remote_lab_fixture
```python
remote_lab_fixture(
    topology: str | Path,
    *,
    name: str | None = None,
    reuse_lab: bool = False,
) -> pytest.fixture
```
Creates a function-scoped pytest fixture bound to a Netlab topology. Call it
at module level in conftest.py or in your test file — the return value is
a real pytest.fixture, so assigning it to a name makes that name available
to any test in the same collection scope.
Arguments
| Argument | Type | Default | Description |
|---|---|---|---|
| `topology` | `str \| Path` | – | Path to a Netlab `.yml` file. Expanded (`~`) and resolved to an absolute path at factory call time. The file must exist at factory call time. |
| `name` | `str \| None` | `None` (keyword-only) | Fixture name override. When omitted, the fixture name defaults to the topology file’s stem (e.g. `demo.yml` → `demo`). |
| `reuse_lab` | `bool` | `False` (keyword-only) | When `True`, the acquire call uses `reuse=true`; the server increments the reference count on an already-running lab with the same content hash instead of refusing. |
Raises
`FileNotFoundError` when the resolved topology path does not exist. This happens at factory call time — during test collection — so typos fail before any test runs.
Returns
A pytest fixture of scope function.
The fixture yields `list[DeviceInfoDto]` — one entry per device in the
running topology.
What the generated fixture does
On each test that depends on it, the fixture executes this lifecycle:
1. Resolves the session-scoped `remote_lab_client` (creating it on the first test that needs a lab).
2. Calls `client.acquire(topology_path, reuse=reuse_lab)`, which blocks through the server’s 423 polling loop until the lab is running.
3. Yields the device list to your test.
4. On teardown — whether the test passed or raised — calls `client.release()`. On a `reuse_lab=False` lab this triggers teardown; on `reuse_lab=True` it decrements the reference count.
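In stdlib terms, that lifecycle is shaped like a try/finally generator. The `FakeClient` below is a hypothetical stand-in for `RemoteLabClient`, used only to make the acquire/yield/release sequence visible:

```python
from contextlib import contextmanager


class FakeClient:
    """Stand-in for RemoteLabClient: records the lifecycle calls."""

    def __init__(self):
        self.calls = []

    def acquire(self, topology_path, reuse):
        self.calls.append(("acquire", reuse))
        return ["r1", "r2"]  # stands in for list[DeviceInfoDto]

    def release(self):
        self.calls.append(("release",))


@contextmanager
def lab_fixture(client, topology_path, reuse_lab=False):
    devices = client.acquire(topology_path, reuse=reuse_lab)  # blocks until running
    try:
        yield devices      # the test body runs here
    finally:
        client.release()   # runs whether the test passed or raised


client = FakeClient()
with lab_fixture(client, "simple_frr.yml", reuse_lab=True) as devices:
    assert len(devices) == 2
print(client.calls)  # [('acquire', True), ('release',)]
```

The `finally` clause is why an assertion failure in your test still releases the lab.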
The fixture does not catch exceptions in your test body. An assertion failure propagates normally and the teardown path still runs release.
Fixture naming
The name assigned to the factory’s return value in your module is what tests
reference. The factory also registers an internal name (the name kwarg or
the topology stem) that the collection-time ordering plugin uses to group
tests by shared lab.
Alongside the name, the factory stashes the fixture’s rank, reuse flag,
topology path, and remote-mode flag in a module-level metadata dict that
the ordering plugin reads at collection time.
Use reuse for fast suites
For a suite of many small assertions against the same topology, declare
one reuse_lab=True fixture and point every test at it. The first test
pays the netlab up cost; the rest run in seconds.
remote_lab_client (session-scoped fixture)
A single RemoteLabClient is shared across every test in a pytest session.
You rarely depend on it directly — remote_lab_fixture pulls it in for you
— but you can request it when you need the underlying client API inside a
test.
Behavior
- Scope: `session`. Exactly one client per pytest process.
- Created lazily on the first test that requests it (directly or via a `remote_lab_fixture`).
- Fails fast if `REMOTE_LAB_URL` is not set when the fixture is first resolved, raising `RuntimeError` with a pointed error message.
- Honors timeout overrides. When `REMOTE_LAB_REQUEST_TIMEOUT`, `REMOTE_LAB_SESSION_TIMEOUT`, or `REMOTE_LAB_ACQUISITION_TIMEOUT` are set in the environment, they override the client’s defaults.
- Teardown. Both a pytest session finalizer and an `atexit` handler call `client.close()`, so the session is always ended even on abnormal pytest exit (SIGINT, worker crash).
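The fail-fast check and override resolution can be sketched in plain Python. The default values below are illustrative assumptions, not the client’s real defaults:

```python
def resolve_client_config(env: dict) -> dict:
    """Mimic the fail-fast check and timeout overrides described above."""
    if "REMOTE_LAB_URL" not in env:
        raise RuntimeError(
            "REMOTE_LAB_URL is not set; point it at your remote lab server."
        )
    defaults = {  # illustrative numbers only
        "REMOTE_LAB_REQUEST_TIMEOUT": 30.0,
        "REMOTE_LAB_SESSION_TIMEOUT": 3600.0,
        "REMOTE_LAB_ACQUISITION_TIMEOUT": 600.0,
    }
    # Environment values win over defaults.
    return {key: float(env.get(key, default)) for key, default in defaults.items()}


cfg = resolve_client_config(
    {"REMOTE_LAB_URL": "http://lab:8000", "REMOTE_LAB_REQUEST_TIMEOUT": "5"}
)
print(cfg["REMOTE_LAB_REQUEST_TIMEOUT"])  # 5.0
```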
Directly requesting the client
The one-fixture-per-test rule
One remote_lab_fixture per test — checked at collection time
A test may depend on at most one fixture created by `remote_lab_fixture`. Requesting two causes pytest collection to fail with `ValueError` — the tests never run.
Why
The server enforces one-lab-per-host as a Netlab limitation. A single test holding two lab fixtures would deadlock at acquire — the second acquire would sit in the 423 polling loop forever, because the first acquire’s session still holds the host.
The plugin catches this at collection so you see the error immediately, not
after pytest has spent five minutes running earlier tests.
What it looks like when it fails
- Requesting two lab fixtures in a single test. Pytest never executes `test_cross_topology` — it errors out of collection first.
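The rule itself is mechanical and can be modeled in a few lines. This is a sketch of the check, not the plugin’s implementation:

```python
def check_one_lab_per_test(test_fixtures: dict, lab_fixtures: set) -> None:
    """Raise ValueError when any test requests more than one lab fixture."""
    for test_id, requested in test_fixtures.items():
        labs = requested & lab_fixtures
        if len(labs) > 1:
            raise ValueError(
                f"{test_id} requests {len(labs)} remote lab fixtures "
                f"({', '.join(sorted(labs))}); at most one is allowed."
            )


tests = {
    "test_ok": {"simple_lab"},
    "test_cross_topology": {"simple_lab", "big_lab"},  # two labs -> rejected
}
try:
    check_one_lab_per_test(tests, {"simple_lab", "big_lab"})
except ValueError as exc:
    print(exc)  # collection fails before any test runs
```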
How to work around it
Split the scenario into two tests. If the scenario requires two topologies
to run back-to-back in the same process, use reuse_lab=True on one and
sequence them via pytest’s normal ordering — which, for lab fixtures, the
plugin deterministically groups by fixture rank (see below).
Test execution ordering
The plugin reorders collected tests so that every test using the same lab fixture runs in one contiguous block. Within that block, the original collection order is preserved.
Rank is assigned at factory-call time by a monotonically increasing counter
— the first `remote_lab_fixture(...)` call in the module gets rank 0, the
next rank 1, and so on. Reorganizing your `conftest.py` therefore changes
the run order; pin the order intentionally or leave it to topology-stem
alphabetical order.
```text
Without reordering: test_a(lab1), test_b(lab2), test_c(lab1)
  -> two teardowns of lab1 if reuse_lab=False

With the plugin:    test_a(lab1), test_c(lab1), test_b(lab2)
  -> one teardown of lab1, one acquire of lab2
```
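The regrouping itself amounts to a stable sort on fixture rank, so tests keep their collection order inside each lab’s block. A sketch, assuming rank is looked up by fixture name:

```python
def regroup(tests: list, rank: dict) -> list:
    """tests: (node_id, lab_fixture) pairs in collection order."""
    # sorted() is stable, so within one lab the original order survives.
    return [node for node, lab in sorted(tests, key=lambda t: rank[t[1]])]


order = regroup(
    [("test_a", "lab1"), ("test_b", "lab2"), ("test_c", "lab1")],
    rank={"lab1": 0, "lab2": 1},
)
print(order)  # ['test_a', 'test_c', 'test_b']
```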
Running pytest --log-cli-level=info to see the order
The plugin logs the computed execution order to the remote-lab-plugin
logger at INFO level, including topology, reuse flag, and the node IDs
in each group. Add --log-cli-level=info while debugging to watch
pytest’s grouping decisions.
End-to-end example
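A minimal shape for such a suite. The topology path, fixture name, and device names are hypothetical; the import path is the one documented above:

```python
# tests/conftest.py
from neops_remote_lab.testing.fixture import remote_lab_fixture

# One shared lab; every test requests it by the assigned name `frr_lab`.
frr_lab = remote_lab_fixture(
    "tests/topologies/simple_frr.yml",
    reuse_lab=True,
)


# tests/test_frr_routing.py
def test_lab_is_up(frr_lab):
    assert len(frr_lab) > 0
```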
Run the whole file with pytest. Only the first test pays the `netlab up` cost; the remaining tests reuse the running lab and release at the end of the pytest session.
Ecosystem note
remote_lab_fixture is the import path that neops-worker-sdk-py uses when
composing its own lab-backed function-block test harness. Downstream test
code written against this API in worker-sdk-py is expected to keep working
across patch and minor versions of this project. If you are maintaining this
project, treat remote_lab_fixture’s signature, remote_lab_client’s name
and scope, and the one-fixture-per-test rule as a public interface — change
them only when you’re ready to coordinate a major-version release with
downstream consumers.
Configuration
The fixture reads four environment variables on first use: REMOTE_LAB_URL (required, points at the server) and three optional timeout overrides — REMOTE_LAB_REQUEST_TIMEOUT, REMOTE_LAB_SESSION_TIMEOUT, REMOTE_LAB_ACQUISITION_TIMEOUT. Set them in your shell or your CI env block before invoking pytest. See Configuration for the full reference, defaults, and the timeout-coordination guidance.
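For example, in a shell or CI env block (the server URL and timeout value are placeholders):

```shell
# Required: where the remote lab server lives.
export REMOTE_LAB_URL="http://lab-server.example:8000"
# Optional: override the client's default acquisition timeout.
export REMOTE_LAB_ACQUISITION_TIMEOUT=600
```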
See also
- RemoteLabClient reference — the HTTP client the fixtures wrap.
- Client config — the environment variables the `remote_lab_client` fixture reads.
- Lab lifecycle — reference counting and reuse semantics (relevant when `reuse_lab=True`).
- Topology format — what to put in the `.yml` file.