How do I generate config_aiaa.json when I want to upload my model to AIAA?

Zip the checkpoint files

zip model.zip \
  model.ckpt.data-00000-of-00001 \
  model.ckpt.index \
  model.ckpt.meta

curl -X PUT "http://127.0.0.1/admin/model/clara_ct_seg_spleen_amp" \
  -F "config=@config_aiaa.json;type=application/json" \
  -F "data=@model.zip"

Can anyone teach me how to generate config_aiaa.json?

Greetings Mark,

Thanks for your interest in Clara Train. May I suggest you take a look at the MMARs for any of the pre-trained models available on NGC. Under the config directory there is a config_aiaa.json that you may use as a template. Here's an example pointer to an MMAR you may download:
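For orientation, the config from a pre-trained MMAR generally has a shape like the sketch below. Every field value and transform name here is a placeholder, not the real API: the exact keys, component names, and arguments vary by model and SDK version, so copy the real structure from the config_aiaa.json inside the MMAR you download rather than from this sketch:

```json
{
  "version": "3",
  "type": "segmentation",
  "labels": ["spleen"],
  "description": "Placeholder description of the model",
  "pre_transforms": [
    { "name": "SomePreTransform", "args": { "fields": "image" } }
  ],
  "inference": {
    "name": "SomeInference",
    "args": {}
  },
  "post_transforms": [
    { "name": "SomePostTransform", "args": { "field": "model" } }
  ],
  "writer": {
    "name": "SomeWriter",
    "args": { "field": "model" }
  }
}
```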

https://ngc.nvidia.com/catalog/models/nvidia:med:clara_ct_seg_spleen_amp

Hope this helps.

Once my model uploads successfully, how do I write the prediction output code?
The NGC segmentation model's output is .nii. I don't understand how to control this output if I want a different output format from my uploaded model.

Thank you for further clarifying - you should have flexibility in writing the result back to the client; perhaps you can make use of the bring-your-own-writer functionality?

https://docs.nvidia.com/clara/tlt-mi/clara-train-sdk-v3.0/aiaa/byom/byow.html
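Conceptually, a custom writer is a class that receives the inference result and writes it to a file that AIAA then returns to the client. The class name, call signature, and dict key below are illustrative assumptions only, not the actual BYOW interface; check the linked docs for the exact contract. A minimal sketch that writes the result as raw bytes instead of NIfTI:

```python
import os
import tempfile


class MyCustomWriter:
    """Illustrative writer sketch (NOT the exact AIAA BYOW interface).

    Assumption: the writer receives a dict holding the post-transformed
    prediction under some key (here 'model') and must produce a file path
    that AIAA sends back to the client.
    """

    def __init__(self, key="model", extension=".bin"):
        self.key = key              # hypothetical key holding the prediction
        self.extension = extension  # pick the output format your client expects

    def __call__(self, data):
        result = data[self.key]
        # Serialize however your client expects; raw bytes for this sketch.
        fd, path = tempfile.mkstemp(suffix=self.extension)
        with os.fdopen(fd, "wb") as f:
            f.write(bytes(result))
        return path
```

Swapping the serialization inside `__call__` (e.g. to PNG or DICOM) is how you would change the output format away from .nii.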

Should the code that loads the model and runs prediction go in the writer functionality, instead of the inference functionality?
I don't understand how the writer is associated with the inference functionality.

Hi, I don't quite understand your question here.

If you want to write your own inference procedure, please follow the examples in https://docs.nvidia.com/clara/tlt-mi/clara-train-sdk-v3.0/aiaa/byom/byoi.html

What AIAA does is as follows:

  1. It receives an image from the user; let's call it O.
  2. AIAA runs the pre-transform chain (specified in the config) on O and gets a result P.
  3. The result P is fed into the network and we get Q.
  4. AIAA runs the post-transforms on the predicted result (Q) that comes out of the network.
  5. Then the writer is invoked to write the result (Q) out to a tmp file (F).
  6. Finally, that tmp file is returned to the user.
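The flow above can be sketched in plain Python. Every function here is a hypothetical stand-in (in AIAA the transforms, network, and writer are the components named in config_aiaa.json), but the ordering of the steps is the one described:

```python
import os
import tempfile


def pre_transforms(image):
    # Step 2: O -> P (e.g. intensity scaling; stand-in for the config's chain).
    return [v / 255.0 for v in image]


def network(p):
    # Step 3: P -> Q (stand-in for the actual model's forward pass).
    return [1 if v > 0.5 else 0 for v in p]


def post_transforms(q):
    # Step 4: refine the prediction Q (identity here for simplicity).
    return q


def writer(q):
    # Step 5: write Q out to a tmp file F.
    fd, path = tempfile.mkstemp(suffix=".bin")
    with os.fdopen(fd, "wb") as f:
        f.write(bytes(q))
    return path


def handle_request(original_image):
    p = pre_transforms(original_image)  # step 2
    q = network(p)                      # step 3
    q = post_transforms(q)              # step 4
    return writer(q)                    # steps 5-6: file path returned to user
```

This also shows why the writer and the inference procedure are separate: inference produces Q, and the writer only decides how Q is serialized back to the client.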