Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
GTX 1060
• Container: nvidia/pytorch:21.05
• Issue Type (questions, new requirements, bugs)
Bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
I used TensorRT to convert a SlowFast model (which uses ResNet 3D convolutions) from ONNX to an engine, but when I compare the outputs from ONNX and TRT they are not the same. With the engine my output is all zeros (0, 0, 0, 0, 0). What happened? Is it because TRT doesn't support 3D Conv?
Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the below snippet:
check_model.py
import sys
import onnx

# Load the ONNX model and run the checker; this raises an error if the model is malformed.
filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command (a sample invocation is sketched below): https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, we request you to share the trtexec "--verbose" log for further debugging.
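For reference, a minimal trtexec invocation might look like the following (file names are placeholders):

trtexec --onnx=slowfast.onnx --saveEngine=slowfast.engine --verbose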
Thanks!
Hi. Sorry I didn't make it clear. I use infer.py to get the TRT output, not infer.cpp. Please refer to the infer.py, extensions.cpp and engine.cpp files. Thank you very much.
This is my script: cpp - Google Drive
You can refer to engine.cpp, export.cpp and engine.h in this folder. I also used trtexec to generate the TRT engine, but the result doesn't change.
Sorry for the delay in response. We could not successfully run the issue repro script; it fails with a libnvinfer.so error (your code links against libnvinfer7). It looks like you're using an old version of the container (and therefore an old version of TensorRT).
We recommend you try the latest TensorRT version, and please let us know if you still face this issue and share the issue repro with us. https://ngc.nvidia.com/containers/nvidia:tensorrt
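For reference, a recent TensorRT container can be pulled with something like the command below (the tag is only an example; pick the latest one listed on NGC):

docker pull nvcr.io/nvidia/tensorrt:23.08-py3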
I also encountered the same problem. When I infer a 3D CNN model, the output is all 0, 0, 0, 0, … accompanied by the error: [TensorRT] ERROR: Parameter check failed at: engine.cpp::enqueue::451, condition: bindings != nullptr
How did you solve it? I tried TensorRT 7.2 and TensorRT 8.2 and got the same error.
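That parameter check usually fails because the bindings array passed to enqueue contains a null pointer, i.e. a device buffer was never allocated for one of the engine's bindings. Below is a minimal sketch of allocating one buffer per binding before running inference with the TensorRT Python API; it assumes pycuda is installed, static input shapes, an engine file named slowfast.engine, and that binding 0 is the input and binding 1 the output (all of these are placeholders for your setup, not anything from the scripts shared above).

run_engine.py
import numpy as np
import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine (file name is a placeholder).
with open("slowfast.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate a host/device buffer for every binding so that no entry of the
# bindings list stays null (the condition the error message checks).
# Assumes all binding shapes are static (no -1 dimensions).
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.zeros(trt.volume(shape), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    host_bufs.append(host)
    dev_bufs.append(dev)

# Copy a dummy input to the device, run inference, copy the output back.
host_bufs[0][:] = np.random.rand(host_bufs[0].size).astype(host_bufs[0].dtype)
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)
cuda.memcpy_dtoh(host_bufs[1], dev_bufs[1])
print(host_bufs[1][:10])

If the output is still all zeros once every binding is backed by a real device buffer, the usual next step would be to feed the same input through onnxruntime and through the engine and compare the two outputs.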