How to pull a pre-trained model from NGC: the command provided by the Nvidia Official Guide doesn't work

Hi,

I have configured the environment that clara-train-sdk needs on my server, and I can run the annotation server successfully.

Because there is no pre-trained model in the docker environment, I ran the command provided in the Nvidia Official Guide: https://docs.nvidia.com/clara/aiaa/tlt-mi-ai-an-sdk-getting-started/index.html

curl -X PUT "http://0.0.0.0:5000/admin/model/annotation_ct_spleen" \
     -H "accept: application/json" \
     -H "Content-Type: application/json" \
     -d '{"path":"nvidia/med/annotation_ct_spleen","version":"1"}'

But I got this error in return:

Model/Config Not Found; Check NGC Path or Try to load from file-system with valid model+config
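A side note: one way to rule out a malformed request body is to check the -d payload locally before sending it (a minimal sketch; assumes python3 is available on the machine):

```shell
# Hypothetical sanity check: validate the -d payload as JSON before sending it
payload='{"path":"nvidia/med/annotation_ct_spleen","version":"1"}'
echo "$payload" | python3 -m json.tool
```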

Can somebody help me…

Thanks.

Best,

pengbol

Hi,

Thanks for trying out AIAA, and sorry for the trouble. On the machine where you run the annotation server (start_aas.sh), what happens when you issue the following command?

curl http://0.0.0.0:5000/v1/models

One other thought: in your browser (say, on a client machine), are you able to access the server API? For example:

http://YOUR.AASERVER.IP.ADDRESS:5000/v1

Meanwhile, let me see if I can reproduce this.

OK, I haven't been able to reproduce your issue. I've gone through the instructions in the user guide. If the AA server has launched successfully, then after issuing the curl commands you can verify that the models are loaded through a client browser:

http://YOUR.AASERVER.IP.ADDRESS:5000/v1/models

[{"description": "A pre-trained model for volumetric (3D) annotation of the spleen from CT image", "version": "1", "sigma": 3.0, "internal name": "annotation_ct_spleen", "roi": [128, 128, 128], "name": "annotation_ct_spleen", "labels": ["spleen"], "type": "annotation", "padding": 20.0},
 {"description": "A pre-trained model for volumetric (3D) segmentation of the spleen from CT image", "version": "1", "internal name": "segmentation_ct_spleen", "name": "segmentation_ct_spleen", "labels": ["spleen"], "type": "segmentation"}]
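To pick out just the model names from a response like that, a small sketch (the JSON below is a trimmed, hypothetical copy of the reply above):

```shell
# List only the "name" field of each model in a /v1/models response
response='[{"name":"annotation_ct_spleen","type":"annotation"},{"name":"segmentation_ct_spleen","type":"segmentation"}]'
echo "$response" | python3 -c 'import json, sys
for m in json.load(sys.stdin):
    print(m["name"])'
```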

That being said, I am running the nvcr.io/nvidia/clara-train-sdk:v1.1-py3 docker image, although I don't think the previous version would have been any different in this regard.

I have a problem getting the pretrained model from NGC.
If I run the command

docker run $NVIDIA_RUNTIME -it --rm -p 5000:5000 $DOCKER_IMAGE start_aas.sh

the server just shows

Scheduler started
 * Serving Flask app "AIAAServer" (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off

I can visit http://MY.AASERVER.IP.ADDRESS:5000/v1/models and http://MY.AASERVER.IP.ADDRESS:5000/v1/docs, but the server is stuck here, so I can't paste and run the command

curl -X PUT "http://0.0.0.0:5000/admin/model/annotation_ct_spleen" \
     -H "accept: application/json" \
     -H "Content-Type: application/json" \
     -d '{"path":"nvidia/med/annotation_ct_spleen","version":"1"}'

to get the pretrained model.
Can somebody help me?

Thanks

I tried another way to get the pretrained model.
I started a docker container based on the nvcr.io/nvidia/clara-train-sdk:v1.1-py3 image. And when I tried to run the command

curl -X PUT "http://0.0.0.0:5000/admin/model/annotation_ct_spleen" \
-H "accept: application/json" \
-H "Content-Type: application/json" \
-d '{"path":"nvidia/med/annotation_ct_spleen","version":"1"}'

inside the container, an error occurs:

% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to 0.0.0.0 port 5000: Connection refused
curl: (6) Could not resolve host: accept
curl: (6) Could not resolve host: Content-Type

[1/2]: "path":"nvidia/med/annotation_ct_spleen" --> <stdout>
curl: (6) Could not resolve host: "path"

[2/2]: "version":"1" --> <stdout>
curl: (6) Could not resolve host: "version"

Can somebody help me…

Thanks

Hi aquraini,

Thanks for your reply~ I tried it again today and found the cause was an unstable internet connection.

I have pulled many pretrained models from NGC successfully!

Thank you very much!

BTW, I have another question… Why can't I get the correct response when I run the command

curl http://0.0.0.0:5000/v1/models

in my local computer's terminal? Shouldn't docker just be an environment launched on top of my local computer? (I can get the info of the pre-trained models correctly when I run the command in the docker environment's terminal.)

Thank you!

Best,

pengbol

Hi liyinhao0413,

You should launch the annotation server first, with this command:

docker run $NVIDIA_RUNTIME -it --rm -p 5000:5000 $DOCKER_IMAGE start_aas.sh

And then, you should open another terminal to get into the docker environment:

docker run -it --rm --ipc=host --net=host --runtime=nvidia --mount type=bind,source=/your/dataset/location,target=/workspace/data $dockerImage /bin/bash

Then you can curl the pre-trained models.

pengbol

Hi pengbo,

I don't know how to reply to private messages; that feature is not working right now.

So I will just address your problem here.
The IP 0.0.0.0 means the server and whoever runs this curl are on the same machine.

If your AIAA server is not running on your local computer,
you should be using curl http://[ip to aiaa server]:5000/v1/models
to check what models you have loaded into your AIAA server.
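In other words, the URL to use depends on where curl runs. A tiny sketch (the address below is a hypothetical placeholder for the machine running start_aas.sh):

```shell
# On the server machine itself, the loopback form works:
LOCAL_URL="http://0.0.0.0:5000/v1/models"
# From any other machine, substitute the server's real address:
AIAA_IP="10.0.0.5"   # hypothetical; replace with your AIAA server's IP
REMOTE_URL="http://${AIAA_IP}:5000/v1/models"
echo "$REMOTE_URL"
```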

If you have other questions you can either post or reply to this thread.

Hi pengbo18555,

Thanks for your reply.

I followed your instructions. I ran this command

nvidia-docker run --runtime=nvidia -it --rm -p 5000:5000 nvcr.io/nvidia/clara-train-sdk:v1.1-py3 start_aas.sh

in one terminal, and ran this command

nvidia-docker run --runtime=nvidia -it --rm -v /home/workspace/liyinhao/AIAA:/workspace/tlt-experiments nvcr.io/nvidia/clara-train-sdk:v1.1-py3 /bin/bash

in another terminal.

But when I run this command

curl -X PUT "http://0.0.0.0:5000/admin/model/annotation_ct_spleen" \
> -H "accept: application/json" \
> -H "Content-Type: application/json" \
> -d '{"path":"nvidia/med/annotation_ct_spleen","version":"1"}'

in that terminal to get the pretrained model, an error occurs:

curl: (7) Failed to connect to 0.0.0.0 port 5000: Connection refused

Thank you!

Best,

liyinhao

Hi yuantingh,

Wow, thanks very much for your reply.

I get it! I will try using curl http://[ip to aiaa server]:5000/v1/models to get the response.

By the way, can I put http://[ip to aiaa server]:5000/v1/models into my MITK URI field to access the remote server from my local machine (with the server running on the remote machine)? I will try that too. And I will be grateful if you point out any mistake in my understanding~

And I also have another questions:

Firstly, I found the pretrained segmentation models don't work well on my own data, for example segmentation_ct_liver_and_tumor and segmentation_ct_lung_tumor. But annotation_ct_liver works well on it. I am a little curious; maybe the data distribution differs a little between the data in the videos and my own data.

Secondly, I want to know how to annotate a complicated structure in a 3D image. I found the 6-point annotation can run only once per image for the same label: when I clear the points and place 6 new points, the old annotation disappears once I confirm the new ones.
For something like a bone with several parts, should I consider them as a whole object?

Thirdly, I found that after running the auto segmentation module, my MITK client becomes a little sluggish when I interact with it. Is this normal?

Thank you very much again!

pengbol

Hi liyinhao0413,

You should run command:

docker run -it --rm --ipc=host --net=host --runtime=nvidia --mount type=bind,source=/your/dataset/location,target=/workspace/data $dockerImage /bin/bash

--ipc=host --net=host is necessary.
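For what it's worth, --net=host makes the client container share the host's network namespace, which is why 0.0.0.0:5000 becomes reachable from inside it; without the flag the container gets its own network namespace and the connection is refused. A quick hedged check of whether the AIAA port accepts connections (uses bash's /dev/tcp, so no extra tools are needed; prints one of two messages depending on whether anything is listening):

```shell
# Exit status of the redirection tells us whether port 5000 accepts connections
if (exec 3<>/dev/tcp/127.0.0.1/5000) 2>/dev/null; then
    echo "port 5000 reachable"
else
    echo "port 5000 refused"
fi
```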

Best,

pengbol

Hi pengbo,

  1. Yes, you can put that in your MITK URI
  2. Yes, we are using the data from here: http://medicaldecathlon.com/
    Make sure your test data has the same orientation and contrast as the data there
  3. This is an interesting use case; currently it is not supported. We will see if we
    want to add this in the next release
  4. We will investigate this issue.

Thank you for your valuable feedback.