Is there any way to get input/output name and dimension info for ETLT models and serialized engine models?

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) : Any
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : Any
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here) : “nvidia/tao/tao-toolkit-tf: v3.21.11-tf1.15.5-py3”
• Training spec file(If have, please share here) : None
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.) : None


I have various etlt models and serialized engine models (which were previously converted by tao-converter).

I have to know the input name and dimensions of the etlt models, because tao-converter requires the -p option for dynamic-batch models.
I also have to know the input/output names & shapes of the serialized engine models for the Triton configuration.

Is there any (regular) way to get input/output name and dimension info for etlt/serialized-engine models?

trtexec --loadEngine=a.engine --exportOutput=abc.json prints some info, but:

  • this solution can be applied only to serialized engine files
  • the reported dimensions sometimes include the batch dimension and sometimes do not, depending on the model
  • this solution does not print the input names and shapes

There is a simple way as below.

$ python -m pip install colored
$ python -m pip install polygraphy --index-url https://pypi.ngc.nvidia.com
$ polygraphy inspect model your_trt_engine

Thank you very much.
I confirmed that I can get the name, shape, and explicitBatch/implicitBatch information for serialized engine models. I would have liked JSON output if possible, but it is OK because the text output can be parsed.
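If JSON output is needed, a small script using the TensorRT Python API can produce it directly from a serialized engine. This is only a sketch, assuming the `tensorrt` Python package is installed and matches the version the engine was built with; the helper names `bindings_to_json` and `dump_engine` are mine, not part of any API.

```python
import json

def bindings_to_json(bindings):
    """Format (name, is_input, dtype, shape) tuples as a JSON string.

    `bindings` is a plain list of tuples, so this helper does not
    depend on TensorRT itself.
    """
    doc = {"inputs": [], "outputs": []}
    for name, is_input, dtype, shape in bindings:
        entry = {"name": name, "dtype": dtype, "shape": list(shape)}
        doc["inputs" if is_input else "outputs"].append(entry)
    return json.dumps(doc, indent=2)

def dump_engine(path):
    # Deserialize the engine and collect its binding metadata.
    # Requires the TensorRT Python bindings; note that for an
    # implicit-batch engine the shapes omit the batch dimension,
    # matching what polygraphy prints.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    with open(path, "rb") as f:
        engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
    bindings = [
        (engine.get_binding_name(i),
         engine.binding_is_input(i),
         str(engine.get_binding_dtype(i)),
         tuple(engine.get_binding_shape(i)))
        for i in range(engine.num_bindings)
    ]
    print(bindings_to_json(bindings))

# Example: dump_engine("a.engine")
```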

I still have two questions.

  1. Is there any way to do this for etlt models?

  2. This is an unimportant issue for me, but the output dtype seems to be float32 for an INT8 model. Is this expected behavior? The model was converted with the tao-converter -t int8 option.

    ---- 1 Engine Input(s) ----
    {input_1 [dtype=float32, shape=(3, 544, 960)]}

    ---- 2 Engine Output(s) ----
    {output_bbox/BiasAdd [dtype=float32, shape=(12, 34, 60)],
     output_cov/Sigmoid [dtype=float32, shape=(3, 34, 60)]}
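For the Triton question in the original post, the inspected names and shapes map onto the model configuration fairly directly. A sketch of a config.pbtxt for the engine above (the model name and max_batch_size are assumptions; with an implicit-batch engine, dims exclude the batch dimension):

```
name: "detectnet_v2"
platform: "tensorrt_plan"
max_batch_size: 8
input [
  {
    name: "input_1"
    data_type: TYPE_FP32
    dims: [ 3, 544, 960 ]
  }
]
output [
  {
    name: "output_bbox/BiasAdd"
    data_type: TYPE_FP32
    dims: [ 12, 34, 60 ]
  },
  {
    name: "output_cov/Sigmoid"
    data_type: TYPE_FP32
    dims: [ 3, 34, 60 ]
  }
]
```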

1) For an etlt model, you can refer to the TAO user guide or its config file. For example:

uff-input-blob-name=Input
output-blob-names=BatchedNMS

2) It is expected. The output layer has float32 format: INT8 is used as the internal compute precision, while the engine's input and output bindings stay FP32 by default.

1) For an etlt model, you can refer to the TAO user guide or its config file. For example:

Some config files do not provide input/output name information.
For example, deepstream_tao_apps/pgie_unet_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub does not provide the input name.

If there is a way to know it for any etlt models, please let me know.
The input name (and shape) is needed to convert with tao-converter.
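Once the input name and shape are known (from the TAO user guide or a sample config for the specific network), they feed directly into tao-converter's -p option, which takes the input name followed by min, opt, and max shapes. A sketch of a dynamic-batch conversion; the key, shapes, and file names below are placeholders:

```shell
# -p format: <input_name>,<min_shape>,<opt_shape>,<max_shape>
tao-converter model.etlt \
  -k $ENCRYPTION_KEY \
  -p input_1,1x3x544x960,4x3x544x960,8x3x544x960 \
  -t fp16 \
  -e model.engine
```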

2) It is expected. The output layer has float32 format.

I got it.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.