Issue with AutoML in Clara v4.0

Hi all,

I’m trying to launch an AutoML session for hyperparameter tuning. I followed the steps described here:

https://docs.nvidia.com/clara/clara-train-sdk/automl/automl_user_guide.html

However, the AutoML session fails. When it starts, it reports that it is listening on port 33330 and then exits with an "address already in use" error.

So I tried exposing port 33330 in the Docker container with the --expose and -p flags, but it doesn’t work. I also tried setting and unsetting the --network=host and --ipc=host flags while launching the container, but still got the same error. I checked port usage with netstat and there is no application using that port.
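
Just to illustrate what I mean, these are roughly the launch variants and the port check I tried (the image name/tag here is only a placeholder for my actual setup):

    # variant 1: publish/expose the AutoML port explicitly
    docker run -it --rm --gpus all \
        --expose 33330 -p 33330:33330 \
        nvcr.io/nvidia/clara-train-sdk:v4.0    # placeholder image/tag

    # variant 2: host networking (docker ignores -p/--expose in this mode)
    docker run -it --rm --gpus all \
        --network=host --ipc=host \
        nvcr.io/nvidia/clara-train-sdk:v4.0    # placeholder image/tag

    # on the host: nothing is shown listening on 33330
    netstat -tlnp | grep 33330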

Do you have any idea what the problem could be?

Thanks in advance,

Gonzalo

Hi
Thanks for your interest in the Clara Train SDK. Please note that we recently released Clara Train v4.1, based on MONAI 0.8.

You don’t need to expose the port outside the Docker container. That feature is there so you can control AutoML from a separate shell that you start yourself; however, that shell does need to be inside the container so it can find the code.
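
For example, once the container is running, that second shell is just another terminal attached to the same container, something along these lines (the container ID is a placeholder):

    # find the running Clara Train container
    docker ps
    # open a second shell inside it to drive AutoML
    docker exec -it <container_id> /bin/bash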

Is the "address already in use" error happening on the first try, right after it shows that it is starting on port 33330? Or are you trying to run two instances at the same time?
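
If it is the first case, it is worth checking from inside the container whether a leftover process from a previous run is still holding the port; something like the following (assuming netstat is available in the container, and the process name here is only a guess):

    # inside the container: see what, if anything, is bound to 33330
    netstat -tlnp | grep 33330
    # look for a leftover AutoML / training process
    ps aux | grep -i automl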

The easiest way to try AutoML is to use the AutoML example MMAR from NGC: clara_train_automl_mri_prostate_cg_and_pz.

Is that what you are using? Also, are you using the latest SDK, v4.1, which was released two weeks ago?