Clara Train SDK 4.0: Running Inference Script with CPU


I would like to run the inference script on the CPU, but I get the following error:

No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
Error processing config /my_workspace/my_models/test_infer_pt_liver_and_tumor_ct_segmentation_v1/commands/…/config/config_inference.json: No CUDA GPUs are available
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.8/", line 87, in _run_code
exec(code, run_globals)
File "apps/", line 31, in
File "apps/", line 23, in main
File "apps/", line 60, in evaluate_mmar
File "<nvflare-0.1.4>/dlmed/utils/", line 172, in configure
File "<nvflare-0.1.4>/dlmed/utils/", line 167, in configure
File "<nvflare-0.1.4>/dlmed/utils/", line 163, in _do_configure
File "apps/", line 242, in finalize_config
File "apps/", line 220, in _setup_model
File "/opt/conda/lib/python3.8/site-packages/torch/jit/", line 161, in load
cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
RuntimeError: No CUDA GPUs are available

I tried editing the torch files and the docker-compose.yml file, setting:

"device": "cpu"

But then I get the following error:
File "/opt/conda/lib/python3.8/site-packages/torch/cuda/", line 166, in _lazy_init
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from

Could you please tell me how to run the inference script on the CPU with Clara Train SDK 4.0?

Best Regards,


Thanks for your interest in Clara Train SDK. The SDK doesn't support running on CPU, as we only test the GPU case. Sorry about that.
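For background on the error itself: the traceback fails inside torch.jit.load, which remaps saved tensors according to its map_location argument. In plain PyTorch, outside the SDK's pipeline, a TorchScript model can usually be loaded on a CPU-only machine by passing map_location="cpu". A minimal sketch (a hypothetical standalone example, not Clara's code path; the Tiny module and /tmp path are illustrative assumptions):

```python
import torch

# A trivial module just to produce a TorchScript artifact.
class Tiny(torch.nn.Module):
    def forward(self, x):
        return x * 2

scripted = torch.jit.script(Tiny())
scripted.save("/tmp/tiny.pt")

# map_location="cpu" remaps any CUDA-resident tensors in the saved
# module to CPU, avoiding "No CUDA GPUs are available" at load time.
model = torch.jit.load("/tmp/tiny.pt", map_location="cpu")
out = model(torch.ones(3))
print(out)  # tensor([2., 2., 2.])
```

This only illustrates the PyTorch mechanism; since the SDK constructs and loads the model internally, the SDK itself remains unsupported on CPU as stated above.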