I was looking at the “Bring Your Own Inference” (BYOI) section of the Clara Train AIAA documentation (https://docs.nvidia.com/clara/tlt-mi/aiaa/byom/byoi.html). In the example config_aiaa.json, the inference section has no arguments. How does the custom inference receive the image? Is there an implicit connection “underneath” between the components of the config file?
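To make my question concrete, here is a minimal sketch of what I imagine might be happening under the hood. This is purely hypothetical (the class and function names are mine, not from AIAA or ai4med): an engine threads a single shared data dictionary through the pre-transforms, the inference, and the post-transforms, so the inference component never needs explicit image arguments in the config.

```python
# Hypothetical sketch only -- NOT the actual AIAA API.
# Illustrates how an engine could pass one shared `data` dict
# through every configured component, which would explain why
# the inference section needs no arguments in config_aiaa.json.

class LoadImage:
    """Stand-in for a pre-transform such as LoadNifti."""
    def __call__(self, data):
        data["image"] = [0.0, 1.0, 2.0]  # pretend voxel data
        return data

class MyInference:
    """Stand-in for a custom inference with no config arguments."""
    def __call__(self, data):
        # Reads the image placed into `data` by the pre-transforms.
        data["pred"] = [v * 2 for v in data["image"]]
        return data

def run_pipeline(pre_transforms, inference, post_transforms, data):
    # One dict flows through all components -- the implicit
    # "connection underneath" I am asking about.
    for t in pre_transforms:
        data = t(data)
    data = inference(data)
    for t in post_transforms:
        data = t(data)
    return data

result = run_pipeline([LoadImage()], MyInference(), [], {"image_path": "img.nii"})
print(result["pred"])
```

Is this roughly the mechanism, or does the custom inference receive the image some other way?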
Also, are the included pre/post transforms (LoadNifti, CopyProperties, etc.) the same ones that ship with the ai4med package? I would like to check their documentation to see all the available arguments.