The Problem: I am attempting to onboard NemoClaw (OpenClaw + OpenShell). While nvidia-smi correctly detects the GPU in WSL2, sandbox initialization fails consistently.
Reproduction Steps:
1. Run nemoclaw onboard.
2. The openshell doctor check passes all system requirements.
3. Step [3/7] “Creating sandbox” reports success.
4. Immediately after, any command to the sandbox (such as applying policy presets in Step 7) returns status: NotFound, message: "sandbox not found".
Registry Issue: Additionally, docker pull for nvcr.io/nim/nvidia/nemotron-3-nano-30b-a3b:latest returns “Access Denied” even after successful docker login with a valid NGC Personal API Key.
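One thing worth double-checking before blaming the key: nvcr.io expects the literal string $oauthtoken as the username (a standard NGC convention), with the API key as the password. A sketch of that login shape (the key value is a placeholder; the docker guard just makes the snippet safe to paste on a machine without Docker):

```shell
# NGC convention: the username is the literal string $oauthtoken,
# NOT your NGC account name; the Personal API Key is the password.
NGC_USER='$oauthtoken'
NGC_API_KEY='nvapi-xxx'   # placeholder -- substitute your real key

if command -v docker >/dev/null 2>&1; then
  printf '%s' "$NGC_API_KEY" | docker login nvcr.io --username "$NGC_USER" --password-stdin \
    || echo "login rejected: replace the placeholder key"
fi
```

If the pull still returns Access Denied after a login in this shape, the usual cause is on the NGC side: the org the key belongs to lacks entitlement to that registry namespace, which no amount of re-logging-in from Docker will fix.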
Hit the same issue on an RTX 5090 Laptop + WSL2 Ubuntu 24.04 + Docker Desktop. Spent hours debugging it.
Root cause: nemoclaw onboard forces --gpu on both openshell gateway start and openshell sandbox create when it detects nvidia-smi. On WSL2 with Docker Desktop, the GPU can’t pass through to the k3s cluster inside the gateway container, so the sandbox reports “created” but is immediately dead; every command after that returns “sandbox not found.”
Workaround — bypass nemoclaw onboard entirely and drive openshell directly without --gpu:
```shell
# Clean stale state (critical — failed runs corrupt k3s)
openshell sandbox delete <name> 2>/dev/null
openshell gateway destroy --name nemoclaw 2>/dev/null
docker volume rm openshell-cluster-nemoclaw 2>/dev/null

# Start gateway WITHOUT --gpu
openshell gateway start --name nemoclaw

# Create provider BEFORE sandbox (credentials injected at creation time)
openshell provider create --name nvidia-nim --type nvidia --credential NVIDIA_API_KEY=nvapi-xxx

# Set inference route
openshell inference set --provider nvidia-nim --model nvidia/nemotron-3-super-120b-a12b

# Create sandbox WITHOUT --gpu
openshell sandbox create --name my-sandbox --from openclaw
```
Once inside the sandbox, run openclaw onboard and select Custom Provider with base URL https://inference.local/v1 (OpenAI-compatible). Don’t use the real NVIDIA URL — the sandbox blocks outbound network. All inference routes through OpenShell’s internal proxy.
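To confirm the route actually works from inside the sandbox, a quick smoke test against the internal endpoint helps — assuming the proxy exposes the usual OpenAI-compatible surface (/v1/chat/completions); the payload shape is plain OpenAI, nothing openshell-specific:

```shell
# Smoke-test the internal proxy from inside the sandbox. Assumes the standard
# OpenAI-compatible API surface; harmless no-op where the host is unreachable.
BASE_URL='https://inference.local/v1'
MODEL='nvidia/nemotron-3-super-120b-a12b'

payload=$(printf '{"model":"%s","messages":[{"role":"user","content":"ping"}]}' "$MODEL")

curl -s --max-time 10 "$BASE_URL/chat/completions" \
  -H 'Content-Type: application/json' \
  -d "$payload" \
  || echo "proxy not reachable (expected outside the sandbox)"
```

A JSON completion back means the provider credential, inference route, and proxy are all wired correctly; an error body from the proxy usually points at whichever of the three is misconfigured.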
Awesome! Just an extra bit: since I initially commented, I’ve made additional updates to the quickstart repo, so it now supports local inference as well as cloud.