Clara Train SDK 4.0: Running Inference Script with CPU

Hello,

I would like to run the infer.sh script on CPU, but I get the following error:

No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
Error processing config /my_workspace/my_models/test_infer_pt_liver_and_tumor_ct_segmentation_v1/commands/…/config/config_inference.json: No CUDA GPUs are available
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "apps/evaluate.py", line 31, in <module>
File "apps/evaluate.py", line 23, in main
File "apps/mmar_conf.py", line 60, in evaluate_mmar
File "<nvflare-0.1.4>/dlmed/utils/wfconf.py", line 172, in configure
File "<nvflare-0.1.4>/dlmed/utils/wfconf.py", line 167, in configure
File "<nvflare-0.1.4>/dlmed/utils/wfconf.py", line 163, in _do_configure
File "apps/eval_configer.py", line 242, in finalize_config
File "apps/eval_configer.py", line 220, in _setup_model
File "/opt/conda/lib/python3.8/site-packages/torch/jit/_serialization.py", line 161, in load
cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
RuntimeError: No CUDA GPUs are available

I tried editing the torch files and the docker-compose.yml file with the following changes (a minimal sketch of the map_location edit is below):
--engine-state-device["cpu"]
"device": "cpu"
map_location=torch.device('cpu')
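Here is roughly what my map_location edit looks like (the model path is just a placeholder, not the SDK's actual layout; the real path comes from config_inference.json):

```python
import torch

# Minimal sketch of the map_location edit I tried.
# The path below is a placeholder, not the SDK's real file layout.
model_path = "/my_workspace/my_models/model.ts"

# map_location should remap any CUDA storages inside the serialized
# TorchScript module to CPU while it is being loaded.
model = torch.jit.load(model_path, map_location=torch.device("cpu"))
model.eval()
```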

But I still get the following error:
File "/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py", line 166, in _lazy_init
torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
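For context, torch.cuda initializes lazily, so this error appears whenever any remaining code path touches CUDA, while plain CPU operations succeed. A quick sketch of what I mean:

```python
import torch

# On a machine without an NVIDIA driver, availability checks are safe:
# they return False instead of raising.
print(torch.cuda.is_available())  # False in this container

# Plain CPU tensors work normally; only operations that actually
# initialize CUDA (e.g. device="cuda") trigger _lazy_init() and raise.
x = torch.zeros(2, 2, device="cpu")
print(x.device)  # cpu
```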

Could you please tell me how to run the infer.sh script on CPU with Clara Train SDK 4.0?

Best Regards,
Khyati

Hi,

Thanks for your interest in Clara Train SDK. Unfortunately, the SDK doesn't support running on CPU; we only test and support the GPU case. Sorry about that.