How to extract three intermediate layers of EfficientNet from TensorRT

Description

A clear and concise description of the bug or issue.

Environment

TensorRT Version: 7.2.2.1
GPU Type: host rtx3090, target tx2
Nvidia Driver Version: 455.32
CUDA Version: 11.1
CUDNN Version: 8.0.5.43
Operating System + Version: host ubuntu1804 target tx2 (jetpack4.4)
Python Version (if applicable): 3.8.5
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9.0+cu111
Baremetal or Container (if container which image + tag): container nvcr.io/nvidia/tensorrt:20.12-py3

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi, I’m converting an EfficientNet model in the order torch → ONNX → TRT, and I’d like to build my application using the values of three intermediate layers of the EfficientNet model.

By the way, I found the topic [Get intermediate layer output] at the link above.

The link shows that the inputs and outputs are fixed in the TRT engine (to some extent I was aware of this from the tutorials), so the value of an intermediate layer cannot be obtained. But if I need three layers, does that mean I should build three TRT models? Or is there a proper example? Thank you

Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the below snippet.

check_model.py

import onnx
filename = "yourONNXmodel"
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.

In case you are still facing the issue, request you to share the trtexec "--verbose" log for further debugging.
Thanks!

There is no problem creating a TRT model! However, what I want is the output values of three layers, so I wonder if I need to make three TRT models, or whether there’s a way to tie them together.

@yeongjae8066,

You need not create three TRT models. All outputs can be specified as a comma-separated list in trtexec --output=.

Thank you for your answer. Should I simply put the ONNX layer names into --output as a list? Is there nothing special I have to do when converting to ONNX? Thank you

@yeongjae8066,

Sorry, --output may not work for this. Please use ONNX GraphSurgeon to add more tensors to the graph’s output list. Since ONNX already records which tensors are the outputs, TRT just uses that information.

Thank you. I will try it!
