
# Debugging

Strategies for diagnosing issues in function blocks, connections, and tests.


## Debugger Setup

Set breakpoints in `run()`, `acquire()`, or capability methods. Disable the blocking detector during debugging — it fires false positives while you step through code.

=== "VS Code"

Add this configuration to `.vscode/launch.json`:

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Debug Function Block Tests",
            "type": "debugpy",
            "request": "launch",
            "module": "pytest",
            "args": [
                "tests/",
                "-v",
                "-s",
                "--no-header",
                "-k", "${input:testFilter}"
            ],
            "console": "integratedTerminal",
            "justMyCode": false,
            "env": {
                "BLOCKING_DETECTION_THRESHOLD": "0"
            }
        }
    ],
    "inputs": [
        {
            "id": "testFilter",
            "description": "Test name filter (-k pattern)",
            "default": "",
            "type": "promptString"
        }
    ]
}
```

`BLOCKING_DETECTION_THRESHOLD=0` disables the blocking detector so it does
not interfere with stepping.

=== "PyCharm"

**Create a pytest run configuration:**

1. Go to *Run > Edit Configurations > Add (+) > pytest*
2. Set **Target** to your test directory (e.g. `tests/`)
3. Set **Working directory** to the project root
4. Under **Environment variables**, add:

    | Variable | Value | Purpose |
    |---|---|---|
    | `BLOCKING_DETECTION_THRESHOLD` | `0` | Disable blocking detector during debugging |
    | `REMOTE_LAB_URL` | `https://your-lab.example.com` | Remote lab endpoint (if using remote lab tests) |

5. Click **OK** and run with the **Debug** button (or ++shift+f9++)

**Tips:**

- Use *Run > Debug* (not *Run > Run*) to enable breakpoints
- PyCharm's *Evaluate Expression* (++alt+f8++) is useful for inspecting
  `DeviceTypeDto`, `WorkflowContext`, and Pydantic model fields mid-debug
- Add `-s` to **Additional Arguments** to see `print()` and logger output
  in the debug console
- For remote lab tests, set `REMOTE_LAB_URL` in the run configuration
  rather than relying on a `.env` file — PyCharm does not auto-load `.env`
  by default

## Common Error Patterns

### Connection Errors

| Error | Typical cause | Fix |
|---|---|---|
| `ConnectionValidationError` | Device missing `ip`, `username`, `password`, or `platform` | Check `DeviceTypeDto` fields in your test context |
| `PluginNotFoundError` | No plugin registered for the platform/type/library | Import the plugin module; verify the platform string matches |
| `AmbiguousPluginError` | Multiple plugins match and no default is set | Specify `connection_library` explicitly or set `default_for_connection_type=True` |
| `ConnectionCreationError` | Auth failure, unreachable host, missing third-party library | Check credentials, network reachability, and `pip list` |
| `NotImplementedForThisPlatform` | Plugin does not implement the called capability method | Check `exc.context` for details; use `get_raw_connection()` as fallback |
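
The `NotImplementedForThisPlatform` fallback from the last row can be sketched as follows. The exception and connection classes here are stand-in stubs — the real ones come from the SDK — so only the try/except shape is the point:

```python
# Stand-in stubs (assumptions): the real exception and connection types
# are provided by the SDK; get_raw_connection() is named in the table above.
class NotImplementedForThisPlatform(Exception):
    def __init__(self, message, context=None):
        super().__init__(message)
        self.context = context or {}


class StubConnection:
    """Stand-in for a plugin connection that lacks get_version()."""

    def get_version(self):
        raise NotImplementedForThisPlatform(
            "get_version not implemented", context={"platform": "generic_os"}
        )

    def get_raw_connection(self):
        return "raw-driver"  # stand-in for the underlying third-party driver


def fetch_version(conn):
    try:
        return conn.get_version()
    except NotImplementedForThisPlatform as exc:
        # exc.context says which platform/capability is missing;
        # fall back to driving the raw connection directly
        raw = conn.get_raw_connection()
        return f"fallback via {raw} (platform={exc.context['platform']})"
```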

### Context Errors

| Error | Typical cause | Fix |
|---|---|---|
| `StopIteration` | `entity_id` doesn't match any device in the context | Ensure `entity_id` in `create_workflow_context` matches a device `.id` |
| `ValueError: entity_id is required` | `entity_id=None` when `run_on="device"` | Pass a valid `entity_id` |
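
The `StopIteration` most likely comes from the context looking up the device by id with `next()` (an assumption about the implementation); a minimal, self-contained illustration of that failure mode:

```python
from dataclasses import dataclass


@dataclass
class Device:
    id: str
    hostname: str


devices = [Device(id="dev-1", hostname="r1"), Device(id="dev-2", hostname="r2")]


def find_device(entity_id: str) -> Device:
    # next() over a generator raises StopIteration when nothing matches --
    # the same failure mode as an entity_id that isn't in the context
    return next(d for d in devices if d.id == entity_id)
```

`find_device("dev-1")` succeeds; `find_device("dev-999")` raises `StopIteration`, which is why the fix is to match `entity_id` against an actual device `.id`.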

### Parameter/Result Errors

| Error | Typical cause | Fix |
|---|---|---|
| `pydantic.ValidationError` | Required field missing, wrong type, or extra field on result | Check parameter class fields and types |
| Extra field warnings in logs | Workflow engine sends fields not in your params model | This is normal — `extra="ignore"` drops them silently |
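
The `extra="ignore"` behavior can be seen in a minimal Pydantic v2 model; the class and field names here are hypothetical, chosen only to illustrate:

```python
from pydantic import BaseModel, ConfigDict


class CollectConfigParams(BaseModel):
    """Hypothetical parameter model; field names are illustrative."""

    model_config = ConfigDict(extra="ignore")  # unknown fields are dropped silently

    hostname: str
    timeout: int = 30
```

Instantiating with an unknown field (`CollectConfigParams(hostname="r1", unexpected="x")`) succeeds and `unexpected` never appears on the model, while omitting the required `hostname` raises `pydantic.ValidationError`.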

## Blocking Detector Warnings

When you see:

```text
WARNING | Blocking operation detected in thread 'FunctionBlock-worker'
         Duration: 2.3s (threshold: 0.5s)
```

This means a synchronous (blocking) call ran inside the async event loop without `@run_in_thread`. Common causes:

- Netmiko / scrapli `send_command()` called without `@run_in_thread`
- `time.sleep()` instead of `asyncio.sleep()`
- Synchronous file I/O or HTTP requests

**Fix:** wrap the blocking call with `@run_in_thread` or move it to a dedicated thread.
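
To show the idea of moving blocking work off the loop, here is a minimal sketch using the stdlib `asyncio.to_thread`; the SDK's `@run_in_thread` decorator serves the same purpose, and `send_command_blocking` is a stand-in for a real driver call:

```python
import asyncio
import time


def send_command_blocking(cmd: str) -> str:
    # Stand-in for a blocking driver call such as Netmiko's send_command()
    time.sleep(0.01)
    return f"output of {cmd}"


async def run_command(cmd: str) -> str:
    # Off-load the blocking call to a worker thread so the event loop
    # stays responsive while it runs
    return await asyncio.to_thread(send_command_blocking, cmd)
```

Called as `asyncio.run(run_command("show version"))`, the sleep happens in a worker thread and never stalls the loop, so the blocking detector stays quiet.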

To adjust sensitivity:

```shell
export BLOCKING_DETECTION_THRESHOLD=1.0   # seconds
export BLOCKING_DETECTION_THRESHOLD=0     # disable
```

## Logging for Debugging

Enable debug-level logging to see function block lifecycle events:

```python
from neops_worker_sdk.logger.logger import Logger

logger = Logger()
logger.debug("Starting config collection", device=device.hostname)
```

Use `with_context()` for structured correlation:

```python
ctx_logger = self.logger.with_context(
    device_id=context.device.id,
    hostname=context.device.hostname,
)
ctx_logger.info("Connected successfully")
ctx_logger.debug("Raw output", output=raw_output[:200])
```

## Remote Lab Debugging

See Remote Lab Testing for setup and configuration.

| Symptom | Checklist |
|---|---|
| Tests hang waiting for lab | Is `REMOTE_LAB_URL` reachable? Check `curl $REMOTE_LAB_URL/healthz` |
| `423 Locked` response | Another session is active; your test is queued. Wait or check server logs |
| Devices unreachable after provisioning | Is routing to the lab subnet in place? Check Tailscale/WireGuard routes |
| Lab fails to provision | Check server logs (`--log-level debug`); verify topology YAML syntax |

## Common Gotchas

| Issue | Symptom | Fix |
|---|---|---|
| Forgot `@pytest.mark.asyncio` | `RuntimeWarning: coroutine was never awaited` | Add the marker or use `asyncio_mode = "auto"` |
| Plugin not imported | `PluginNotFoundError` despite plugin existing | Add `import my_package.plugins` in `conftest.py` or the test file |
| Registry pollution between tests | `AmbiguousPluginError` in unrelated tests | Use a `clear_registry()` fixture (see Testing Plugins) |
| `@fb_test_case` test not discovered | No test output | Apply the decorator to the class definition, not an instance |
| Mock context missing `entity_id` | `StopIteration` or `ValueError` | Match `entity_id` to a device in the context's device list |