Description
A clear and concise description of the bug or issue.
Environment
TensorRT Version: 7.2.2.1
GPU Type: host: RTX 3090, target: Jetson TX2
Nvidia Driver Version: 455.32
CUDA Version: 11.1
CUDNN Version: 8.0.5.43
Operating System + Version: host: Ubuntu 18.04, target: Jetson TX2 (JetPack 4.4)
Python Version (if applicable): 3.8.5
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.9.0+cu111
Baremetal or Container (if container which image + tag): container nvcr.io/nvidia/tensorrt:20.12-py3
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered
Hi, I am converting an EfficientNet model in the order PyTorch → ONNX → TensorRT, and I want to build my application using the outputs of three intermediate layers of the model.
However, according to the linked discussion above ([Get intermediate layer output]), the inputs and outputs of a TensorRT engine are fixed at build time (as far as I understand from the tutorials), so intermediate layer values cannot be retrieved at runtime. Given that, if I need three intermediate layers, do I have to build three separate TRT engines? Or is there a proper way (or an example) to get all three from one engine? Thank you.