Output data is all 0 and I don't know why

Description

I'm using an example from "tensorrt-utils_rmccorm4/inference". I followed the README, but the output data is all zeros, whereas the README shows the values should be nonzero numbers.

My output is as follows:

(venv_tensorrt8) inference:$ python3 infer.py -e alexnet_fixed.engine
[12/20/2022-15:15:07] [TRT] [W] TensorRT was linked against cuBLAS/cuBLASLt 11.6.5 but loaded cuBLAS/cuBLASLt 11.5.1
Loaded engine: alexnet_fixed.engine
[12/20/2022-15:15:08] [TRT] [W] TensorRT was linked against cuBLAS/cuBLASLt 11.6.5 but loaded cuBLAS/cuBLASLt 11.5.1
Active Optimization Profile: 0
Engine/Binding Metadata
        Number of optimization profiles: 1
        Number of bindings per profile: 2
        First binding for profile 0: 0
        Last binding for profile 0: 1
Generating Random Inputs
        Using random seed: 42
        Input [actual_input_1] shape: (10, 3, 224, 224)
Input Metadata
        Number of Inputs: 1
        Input Bindings for Profile 0: [0]
        Input names: ['actual_input_1']
        Input shapes: [(10, 3, 224, 224)]
Output Metadata
        Number of Outputs: 1
        Output names: ['output1']
        Output shapes: [(10, 1000)]
        Output Bindings for Profile 0: [1]
idling ...
elapsed time = 0.15895748138427734
Inference Outputs:
1 (10, 1000)
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]]
(venv_tensorrt8) inference:$
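As a quick sanity check, it may help to confirm on the host side that the buffer really is all zeros after the device-to-host copy (a never-written host buffer also prints as zeros). This is only a sketch using NumPy; `outputs` here is a stand-in for the array `infer.py` prints, not a variable from that script:

```python
import numpy as np

def is_all_zero(arr: np.ndarray, atol: float = 0.0) -> bool:
    """Return True when every element of the output buffer is (near) zero."""
    return bool(np.all(np.abs(arr) <= atol))

# Stand-in for the (10, 1000) output that infer.py printed above.
outputs = np.zeros((10, 1000), dtype=np.float32)
print(is_all_zero(outputs))        # True  -> matches the symptom in the log
print(is_all_zero(outputs + 0.1))  # False -> what a healthy run should show
```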

Environment

nvidia-cublas-cu11 11.10.3.66
nvidia-cublas-cu117 11.10.1.25
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cuda-runtime-cu117 11.7.60
nvidia-cudnn-cu11 8.5.0.96
nvidia-cudnn-cu116 8.4.0.27
nvidia-tensorrt 8.2.5.1

torch 1.11.0+cu113
torchaudio 0.11.0+cu113
torchvision 0.12.0+cu113
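One thing that stands out in the list above is the mix of CUDA wheel flavors: `-cu11`, `-cu116`, and `-cu117` packages side by side, plus a torch build for cu113. Whether that matters here is not confirmed, but it is easy to flag mechanically. A hedged sketch that scans a `pip list`-style dump for mixed `cuXXX` suffixes (the sample lines are copied from this report):

```python
import re

def cuda_flavors(pip_list_text: str) -> set:
    """Collect the distinct -cuXXX suffixes found in a `pip list`-style dump."""
    return set(re.findall(r"-cu(\d+)\b", pip_list_text))

report = """\
nvidia-cublas-cu11 11.10.3.66
nvidia-cublas-cu117 11.10.1.25
nvidia-cudnn-cu11 8.5.0.96
nvidia-cudnn-cu116 8.4.0.27
"""

flavors = cuda_flavors(report)
print(sorted(flavors))  # ['11', '116', '117']
if len(flavors) > 1:
    print("Warning: mixed CUDA wheel flavors installed:", sorted(flavors))
```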

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,

We recommend using the latest official NVIDIA samples with the latest TensorRT version, 8.5.

Thank you.

It seems everything works except the inference itself.

I guess the warning `[TRT] [W] TensorRT was linked against cuBLAS/cuBLASLt 11.6.5 but loaded cuBLAS/cuBLASLt 11.5.1` suggests an issue that could cause all outputs to become 0.

[12/20/2022-18:16:54] [W] [TRT] TensorRT was linked against cuBLAS/cuBLASLt 11.6.5 but loaded cuBLAS/cuBLASLt 11.5.1
[12/20/2022-18:16:54] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 2757, GPU 2224 (MiB)
[12/20/2022-18:16:54] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 2757, GPU 2232 (MiB)
[12/20/2022-18:16:54] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +3, now: CPU 0, GPU 4 (MiB)

Could this be a possible cause?
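For what it's worth, this warning only compares the cuBLAS version TensorRT was built against with the one found at runtime; when only the minor version differs, it is commonly benign, though this thread does not confirm that for my case. A hedged sketch that just parses the two version strings from the log and reports how far apart they are:

```python
def version_gap(linked: str, loaded: str) -> str:
    """Classify how the linked and loaded cuBLAS versions differ."""
    a = [int(p) for p in linked.split(".")]
    b = [int(p) for p in loaded.split(".")]
    if a == b:
        return "exact match"
    if a[0] != b[0]:
        return "major mismatch (more likely to matter)"
    return "minor/patch mismatch (often benign)"

# Versions taken from the warning in the log above.
print(version_gap("11.6.5", "11.5.1"))  # minor/patch mismatch (often benign)
```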

Hi,

The above may not be the source of your problem. Could you please try the latest TensorRT version with the latest samples? If you still face this issue, please share a minimal repro ONNX model and script with us for better debugging.

Thank you.