How to use NVIDIA Model Analyzer?

Hi all,

I am currently trying to run Model Analyzer following the steps described in this post: Maximizing Deep Learning Inference Performance with NVIDIA Model Analyzer

However, when I download the chest_xray and segmentation_liver models from NGC and point Model Analyzer at them, it throws the following errors:

Failed to load chest_xray on inference server: skipping model
Failed to load segmentation_liver on inference server: skipping model
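
For reference, my understanding is that Model Analyzer simply hands the --model-folder path to Triton as a model repository, so each model has to follow the standard repository layout, roughly like this (directory and file names below are only illustrative; the actual model file name depends on the framework/backend):

/home/models/
  chest_xray/
    config.pbtxt
    1/
      model.savedmodel (or model.plan / model.graphdef / ... depending on the backend)
  segmentation_liver/
    config.pbtxt
    1/
      model.plan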

Here is the docker command I use to start Model Analyzer:

docker run --gpus all -v /var/run/docker.sock:/var/run/docker.sock \
-v /home/$USER/server/docs/examples/model_repository:/home/models \
-v /home/$USER/results:/results --net=host model-analyzer:latest \
--batch 1,2,4 \
--concurrency 1,2,4 \
--model-names chest_xray,segmentation_liver \
--triton-version 20.02-py3 \
--model-folder /home/models \
--export --export-path /results/
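
Would it make sense to first verify that Triton itself can load these two models from the same repository, outside Model Analyzer? Something along these lines (treat this as a sketch: the container and server binary names differ between releases, e.g. recent releases use nvcr.io/nvidia/tritonserver and the tritonserver binary, while 20.02 still shipped as the tensorrtserver image with the trtserver binary, if I remember correctly):

docker run --gpus all --rm \
-v /home/$USER/server/docs/examples/model_repository:/models \
nvcr.io/nvidia/tritonserver:<xx.yy>-py3 \
tritonserver --model-repository=/models --log-verbose=1

That should at least print the actual reason each model fails to load.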

How can I use this tool properly? The documentation alone has not been enough to get past this error.