Provide details on the platforms you are using:
Linux distro and version
GPU type
Nvidia driver version
CUDA version
CUDNN version
Python version (if using Python)
TensorFlow version
TensorRT version
If Jetson, OS and hardware versions
Describe the problem
Files
Include any logs, source, models (.uff, .pb, etc.) that would be helpful to diagnose the problem.
If relevant, please include the full traceback.
Reproducibility
Please provide a minimal test case that reproduces your error.
TRT engine built with onnx2trt (batch size = 2) -> copy two input images to device memory -> execute -> output (batch 0: OK, batch 1: all zeros)
It works in version 5.1.6, but after updating to 6.0.1 the same code no longer works (the only change I made was execute(batch_size, buffer) -> executeV2(buffer)).
So I would like to know whether something has changed in how the batch size is set.
I have exactly the same problem with TensorRT 7; it worked perfectly with TensorRT 5.
I am currently trying changes such as executeV2, adding an explicit batch dimension, etc. It does not work: the first element is predicted correctly, but the rest come out wrong (object detection in my case).
In the past I checked that the output order is NCHW; that has not changed, right?
Hi,
Request you to share the ONNX model and the script if not shared already so that we can assist you better.
Alongside, you can try a few things:
1) Validate your model with the below snippet:
check_model.py
import onnx

# Path to your ONNX model (placeholder)
filename = "your_model.onnx"
model = onnx.load(filename)
onnx.checker.check_model(model)
2) Try running your model with the trtexec command.
In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
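As an example, a trtexec invocation that captures a verbose log might look like the following. The model path and the input tensor name "input" are hypothetical; the shape flags are only needed if the model has a dynamic batch dimension:

```shell
# Build an engine from the ONNX model and capture a verbose log
trtexec --onnx=model.onnx \
        --minShapes=input:1x3x224x224 \
        --optShapes=input:2x3x224x224 \
        --maxShapes=input:2x3x224x224 \
        --verbose 2>&1 | tee trtexec_verbose.log
```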
Thanks!