Error while uploading model in AIAA Server

My server is running at 127.0.1.1:80.
After running the command below in a new docker container, I get the following error.

curl -X PUT "http://127.0.1.1:80/admin/model/annotation_ct_spleen" -H "accept: application/json" -H "Content-Type: application/json" -d '{"path":"nvidia/med/annotation_ct_spleen","version":"1"}'
Error:

500 Internal Server Error

Internal Server Error

The server encountered an internal error or misconfiguration and was unable to complete your request.

Please contact the server administrator at [no address given] to inform them of the time this error occurred, and the actions you performed just before this error.

More information about this error may be available in the server error log.


Apache/2.4.29 (Ubuntu) Server at 127.0.1.1 Port 80

Please help

Hi there,

Thanks for using AIAA!
I apologize that part of the documentation is not up-to-date.
For the annotation model, please try the following:

curl -X PUT "http://127.0.0.1/admin/model/clara_ct_annotation_spleen_amp" \
 -H "accept: application/json" \
 -H "Content-Type: application/json" \
 -d '{"path":"nvidia/med/clara_ct_annotation_spleen_amp","version":"1"}'

A list of available models for v2.0 can be found here:
https://ngc.nvidia.com/catalog/containers/nvidia:clara-train-sdk

I tried this as well and I got an Invalid Model Config error.

curl -X PUT "http://127.0.0.1/admin/model/clara_ct_seg_spleen_no_amp" \

 -H "accept: application/json" \
 -H "Content-Type: application/json" \
 -d '{"path":"nvidia/med/clara_ct_seg_spleen_no_amp","version":"1"}'

{"error":{"message":["5","Invalid Model Config: /opt/nvidia/medical/nvmidl/apps/aas/actions/…/configs/models/clara_ct_seg_spleen_no_amp.json"],"type":"AIAAException"},"success":false}

Make sure you use the AIAA engine and include the port number as well.

Start the server with the AIAA engine to use TF inference:

start_aas.sh --workspace /workspace/aiaa --engine AIAA

curl -X PUT "http://127.0.0.1:5000/admin/model/clara_ct_seg_spleen_no_amp"
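
To double-check that the model loaded, you can also list the models currently on the server (this assumes the default REST endpoint and port mapping; adjust host/port to your setup):

curl -X GET "http://127.0.0.1:5000/v1/models"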


I just tried starting and loading the model using these commands and it works on my end.
What commands are you using to start the docker container and the AIAA server?
Are you using the 3.0 version?
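
For reference, a typical sequence on my end looks roughly like this (the image tag, port mapping, and GPU flags below are assumptions; adjust them to your environment):

# start the Clara Train container and expose the AIAA port
docker run -it --rm --gpus all --ipc=host -p 5000:5000 nvcr.io/nvidia/clara-train-sdk:v3.0

# then, inside the container, start the AIAA server
start_aas.sh --workspace /workspace/aiaa --engine AIAA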

This worked for me. It was not mentioned in the notebook that we have to pass one more parameter, --engine AIAA.
My next question is related to 3D Slicer.
I am passing this as the server address: http://127.0.0.1:5000/admin/model/clara_ct_seg_spleen_no_amp

The model also loaded, but it is not visible as described in the link below.

Model Name : annotation_mri_brain_tumors_tico_tc

Please let me know the correct server address to pass in 3D Slicer.

Thanks for your help.

I am unable to download Dextr3D. Can anyone help me with downloading it? It is mentioned in the 3D Slicer use case, but no information has been provided on how to download it. I am able to download the DeepGrow and spleen segmentation models, but not Dextr3D.
Thanks!

Hi
Could you clarify what the issue is? All models are listed on the landing page for Clara Train v3 (https://ngc.nvidia.com/catalog/containers/nvidia:clara-train-sdk); the middle section lists all of them.
For the spleen, Dextr3D is the annotation model, found at https://ngc.nvidia.com/catalog/models/nvidia:med:clara_ct_annotation_spleen_no_amp
Are you having an issue downloading it from NGC or uploading it to AIAA? Please share the error message, if any.

Thanks

We have the curl commands below for uploading inference models to the AIAA server. Do we have a curl command for the Dextr3D model as well?

curl -X PUT "http://ip/admin/model/clara_deepgrow" -H "accept: application/json" -H "Content-Type: application/json" -d '{"path":"nvidia/med/clara_train_deepgrow_aiaa_inference_only","version":"1"}'

curl -X PUT "http://ip/admin/model/clara_ct_seg_spleen_no_amp" \
 -H "accept: application/json" \
 -H "Content-Type: application/json" \
 -d '{"path":"nvidia/med/clara_ct_seg_spleen_no_amp","version":"1"}'

Hi
You should be able to use the NGC list command to get the names of all available models; you then just need to change the model name in the upload command:

    curl -X PUT "http://ip/admin/model/**ModelNameinAIAA**" \
     -H "accept: application/json" \
     -H "Content-Type: application/json" \
     -d '{"path":"nvidia/med/**ModelNameOnNGC**","version":"1"}'
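
If you have the NGC CLI installed, something like the following should list the available model names (the exact filter pattern is an assumption):

    ngc registry model list "nvidia/med/*"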

Alternatively, you can simply download them from the web UI (see links in the previous post), then upload the model to AIAA as per our documentation.
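
For a model downloaded from the web UI, the upload uses the same PUT endpoint but with the files attached instead of an NGC path; roughly (the file names below are placeholders for the files in the downloaded archive):

curl -X PUT "http://127.0.0.1:5000/admin/model/clara_ct_seg_spleen_no_amp" \
 -F "config=@config_aiaa.json;type=application/json" \
 -F "data=@model.trt.pb"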

Hope that helps