Description
Please check this GitHub issue. No one has responded to me there, so I'm asking again here. CUDNN_STATUS_BAD_PARAM when infer with dynamic shape · Issue #1281 · NVIDIA/TensorRT · GitHub
Environment
TensorRT Version :
GPU Type :
Nvidia Driver Version :
CUDA Version :
CUDNN Version :
Operating System + Version :
Python Version (if applicable) :
TensorFlow Version (if applicable) :
PyTorch Version (if applicable) :
Baremetal or Container (if container which image + tag) :
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
Exact steps/commands to build your repro
Exact steps/commands to run your repro
Full traceback of errors encountered
Hi @ysyyork ,
We recommend that you share the complete error logs and a model/script that reproduces the issue so we can assist you better.
Thank you.
I shared everything in the GitHub link. I have reposted this three times already…
Hello, are there any updates on this?
@ysyyork ,
In the GitHub link you shared, we are unable to find the ONNX model or the complete error logs.
It looks like you shared a TRT engine. The engine needs to be built on the machine where inference will run: TensorRT optimizes the graph for the GPUs available at build time, so an engine is platform specific and not portable across different platforms.
We therefore recommend that you try the latest TensorRT version, and if you still face this issue, please share the ONNX model and the trtexec command you used to generate the engine (the inference script is already available in the Git link). We would like to try reproducing the error on our end for better assistance.
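For reference, a typical trtexec invocation for building an engine with dynamic input shapes looks like the sketch below. The input name `input` and the NHWC shape ranges are placeholders, not values taken from your model; substitute the real input tensor name and the shape range you need. (On TensorRT 8 and later, explicit batch is the default and the flag can be dropped.)

```
trtexec --onnx=model.onnx \
        --saveEngine=model.trt \
        --explicitBatch \
        --minShapes=input:1x32x32x1 \
        --optShapes=input:1x32x320x1 \
        --maxShapes=input:1x32x1024x1
```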
Hope the following link helps you:
(Embedded issue: opened 05 Nov 2020, closed 21 May 2021 · labels: Component: ONNX, Topic: Dynamic Shape, triaged)
## Description
When inferencing images with dynamic shapes, some images work well, while others throw this error:
[TensorRT] ERROR: ../rtSafe/cuda/cudaConvolutionRunner.cpp (362) - Cudnn Error in execute: 3 (CUDNN_STATUS_BAD_PARAM)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception
I trained a model with Keras and converted the .h5 model to ONNX using the keras2onnx library. Then I used build_trt.py to produce the TRT engine. However, when I run inference with test.py, some images work well and some don't: in my example, right.jpg works, while error.jpg doesn't. When I set a fixed shape in build_trt.py, both images work. I can't figure out why this happens.
I did more tests and found that some shapes work well while others don't. All shapes work well with the Keras model and the ONNX model.
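For context, here is a minimal sketch (not taken from build_trt.py, whose contents aren't shown here) of how a dynamic-shape optimization profile is typically configured with the TensorRT 7 Python API. The input name "input" and the NHWC shape ranges are placeholder assumptions:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    # Explicit batch mode is required when parsing ONNX models
    # that use dynamic shapes.
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(flags) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        config = builder.create_builder_config()
        config.max_workspace_size = 1 << 30  # 1 GiB scratch space
        # One optimization profile covering the whole range of widths
        # expected at inference time. "input" and the NHWC shapes below
        # are placeholders; use the model's real input name and ranges.
        profile = builder.create_optimization_profile()
        profile.set_shape("input",
                          (1, 32, 32, 1),     # min
                          (1, 32, 320, 1),    # opt
                          (1, 32, 1024, 1))   # max
        config.add_optimization_profile(profile)
        return builder.build_engine(network, config)
```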
## Environment
**TensorRT Version**: 7.0.0.11
**GPU Type**: RTX 2080 TI
**Nvidia Driver Version**: 430.40
**CUDA Version**: 10.0
**CUDNN Version**: 7.6.4
**Operating System + Version**: Ubuntu 18.04
**Python Version (if applicable)**: 3.6.9
**TensorFlow Version (if applicable)**: 1.14.0
**PyTorch Version (if applicable)**: Not Used
**Keras Version**: 2.3.0
**Baremetal or Container (if container which image + tag)**:
## Relevant Files
https://drive.google.com/file/d/1PrMEWOnsocAtPXupsd1KGFnsHGHTCdsC/view?usp=sharing
## Steps To Reproduce
1. Decompress test.trt.zip
2. Run build_engine in build_trt.py to produce the TRT engine, or just use the provided crnn_engine.trt
3. Run test.py (see the inference sketch after this list)
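On the inference side, the concrete shape of each image must be bound before execution, and it must fall inside the profile's min/max range. Here is a minimal sketch, assuming a single input/output binding, float32 I/O, and pycuda for memory management (none of this is confirmed from test.py):

```python
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

def infer(engine, image):
    """Run one dynamic-shape inference on a preprocessed float32 image.

    image.shape must lie inside the min/max range of the optimization
    profile the engine was built with, or execution fails.
    """
    with engine.create_execution_context() as context:
        # Bind the concrete input shape before querying the output shape.
        context.set_binding_shape(0, image.shape)
        output = np.empty(tuple(context.get_binding_shape(1)),
                          dtype=np.float32)  # assumes a float32 output
        d_in = cuda.mem_alloc(image.nbytes)
        d_out = cuda.mem_alloc(output.nbytes)
        cuda.memcpy_htod(d_in, np.ascontiguousarray(image))
        context.execute_v2([int(d_in), int(d_out)])
        cuda.memcpy_dtoh(output, d_out)
        return output
```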
Thank you.