Output of TensorRT inference not matching ONNX and PyTorch model (SlowFast using ResNet 3D Conv)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

GTX 1060

• Container: nvidia/pytorch:21.05

• Issue Type (questions, new requirements, bugs)
Bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

I used TensorRT to convert the SlowFast model (which uses ResNet 3D convolutions) from ONNX to an engine, but when I check the outputs, ONNX and TRT are not the same. With the engine my output = 0,0,0,0,0. What happened? Is it because TRT doesn't support 3D Conv?
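For reference, this is roughly how I get the ONNX baseline output that I compare against. A minimal sketch assuming onnxruntime is installed; the random inputs are only a smoke test, and any dynamic dimensions are filled with 1:

compare_onnx.py

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("slow_fast1_athai.onnx")
feeds = {}
for inp in sess.get_inputs():
    # Replace dynamic dimensions (reported as strings/None) with 1 for the smoke test.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    feeds[inp.name] = np.random.rand(*shape).astype(np.float32)

outputs = sess.run(None, feeds)
print([o.flatten()[:10] for o in outputs])  # values to compare against the TRT engine output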

Thank you so much!!

Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

filename = "yourONNXmodel.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
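For example (assuming the ONNX file name from this thread), a typical invocation is:

trtexec --onnx=slow_fast1_athai.onnx --verbose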
In case you are still facing the issue, we request you to share the trtexec "--verbose" log for further debugging.
Thanks!

Thank you. This is my ONNX model; I will upload the script later.
slow_fast1_athai.onnx (21.1 MB)

I used trtexec with "--verbose" and the result is no different.

This link is my script: cpp - Google Drive

Hi,

// // Get back the results
// unique_ptr<float[]> output(new float[10]);
// cudaMemcpy(output.get(), output_d, sizeof(float) * 10, cudaMemcpyDeviceToHost);

Could you please let us know the reason for commenting out this part of the code in infer.cpp? Could you please try uncommenting it?

Thank you.

Hi. Sorry, I didn't make it clear. I use infer.py to get the TRT output; I do not use infer.cpp. Please refer to the infer.py, extensions.cpp, and engine.cpp files. Thank you very much.

Hi,

Could you please share the ONNX model mentioned in infer.py?
We would like to reproduce the error on our end for better debugging.

Thank you.

Yes sir.

This is my ONNX model. You can rename the file in infer.py to reproduce the bug.

Thank you very much.
slow_fast1_athai.onnx (21.1 MB)

Could you please let us know how you generated the TRT engine? Please share the script or command you used.

Thank you.


Thank you.

This is my script: cpp - Google Drive
You can refer to engine.cpp, export.cpp, and engine.h in this folder. I also used trtexec to generate the TRT engine, but the result doesn't change.
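For reference, the trtexec route was roughly the following (the engine file name is just what I use locally):

trtexec --onnx=slow_fast1_athai.onnx --saveEngine=slow_fast1_athai.engine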

Thanks.

Hi,

Sorry for the delay in response. We could not successfully run the issue repro script due to a libnvinfer.so error (your code uses libnvinfer7). It looks like you are using an old version of the container (an old version of TensorRT).
We recommend you please try the latest TensorRT version, and let us know if you still face this issue, sharing the issue repro with us.
https://ngc.nvidia.com/containers/nvidia:tensorrt
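For example, pulling a recent container looks like this (the tag is a placeholder; pick the current release from the NGC page above):

docker pull nvcr.io/nvidia/tensorrt:xx.xx-py3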

Thank you.


Hi,

Thanks for your feedback. I will try the latest TensorRT version and notify you when the result is out.

Thank you very much.

I also encountered the same problem: when I run inference on a 3D-CNN model, the output is all 0,0,0,0…, accompanied by the error: [TensorRT] ERROR: Parameter check failed at: engine.cpp::enqueue::451, condition: bindings != nullptr
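From the message, at least one entry in the bindings array passed to enqueue is a null device pointer, i.e. a buffer was never allocated for some binding. For comparison, the standard binding setup looks like this (a minimal sketch assuming pycuda and a static-shape engine; the file name is illustrative):

infer_bindings.py

import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
with open("slow_fast1_athai.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
stream = cuda.Stream()

# Allocate one host/device buffer pair per binding so no bindings entry is null.
host_bufs, dev_bufs = [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    h = cuda.pagelocked_empty(size, dtype)
    d = cuda.mem_alloc(h.nbytes)
    host_bufs.append(h)
    dev_bufs.append(d)
bindings = [int(d) for d in dev_bufs]  # every entry must be a valid device pointer

# Copy inputs in, execute, copy outputs back.
for i in range(engine.num_bindings):
    if engine.binding_is_input(i):
        cuda.memcpy_htod_async(dev_bufs[i], host_bufs[i], stream)
context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
for i in range(engine.num_bindings):
    if not engine.binding_is_input(i):
        cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
stream.synchronize()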

How did you solve it? I tried TensorRT 7.2 and TensorRT 8.2 and got the same error.
