TensorRT renames output layers

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson AGX Orin
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.2

The PyTorch code that builds and runs the TensorRT engine for this model assigns its own names to the heads at the output layer. However, when I profile the TensorRT engine with trtexec, I see generic names like output_0, output_1, and output_2 instead. Why is this happening? Am I missing something?

Thank you!

You can use INetworkDefinition::setWeightsName() to name weights at build time - the ONNX parser uses this API to associate the weights with the names used in the ONNX model. Otherwise, TensorRT will name the weights internally based on the related layer names and weight roles.

https://docs.nvidia.com/deeplearning/tensorrt/latest/inference-library/advanced.html
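
Note that the passage above is about weight names; the names trtexec reports for outputs come from the output tensors themselves, which keep whatever name they carry when marked as network outputs. A minimal sketch with the raw TensorRT Python API (the input shape and head name below are hypothetical, not from your model):

import tensorrt as trt

logger = trt.Logger(trt.Logger.ERROR)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

x = network.add_input("input", trt.float32, (1, 3, 224, 224))
layer = network.add_identity(x)  # stand-in for a real output head

head = layer.get_output(0)
head.name = "bbox_head"  # without this, TensorRT generates a name
network.mark_output(head)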

Thank you for your response.
Is there a way I can do the same with torch2trt()?

Hi,

Yes, the output_%d names come from the function below:

https://github.com/NVIDIA-AI-IOT/torch2trt/blob/master/torch2trt/torch2trt.py#L303

def default_output_names(num_outputs):
    return ["output_%d" % i for i in range(num_outputs)]

However, this function is only called when no output names are pre-defined.
Please pass the expected output names when calling torch2trt, for example:
output_names=['output_0', 'output_1', 'output_2']

https://github.com/NVIDIA-AI-IOT/torch2trt/blob/master/torch2trt/torch2trt.py#L518

def torch2trt(module,
              inputs,
              input_names=None,
              output_names=None,
              log_level=trt.Logger.ERROR,
              fp16_mode=False,
              max_workspace_size=1<<25,
              strict_type_constraints=False,
              ...
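
For example, a minimal end-to-end sketch (the toy module and head names below are placeholders, not from this thread; your real model's ops must have torch2trt converters):

import torch
import torch.nn as nn
from torch2trt import torch2trt

class ThreeHeads(nn.Module):
    # Toy stand-in for a model with three output heads.
    def forward(self, x):
        return x + 1, x * 2, x - 1

model = ThreeHeads().cuda().eval()
x = torch.ones(1, 3, 224, 224).cuda()

# Explicit names replace the generated output_0/output_1/output_2.
model_trt = torch2trt(
    model,
    [x],
    input_names=['input'],
    output_names=['bbox', 'scores', 'classes'],  # hypothetical head names
)

Profiling the serialized model_trt.engine with trtexec should then report these names instead of the generic ones.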

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.