Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 5.0
• TensorRT Version: 8.0.0-1
• NVIDIA GPU Driver Version (valid for GPU only): 11.3
I am following the steps given in GitHub - NVIDIA-AI-IOT/yolov4_deepstream to create a TensorRT engine file, which I then use to run deepstream-app. However, I get the following error:
Based on this topic and a few other topics, this error is caused by a mismatch between the TensorRT version used to create the engine and the one used to deserialize it. However, I am creating the engine on the same machine, so I would assume the same TensorRT version is used for both tasks.
Do you know what could be causing this problem?
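For anyone hitting the same error: a serialized TensorRT engine is only guaranteed to deserialize under the same TensorRT version that built it. A minimal sketch of that rule (the version strings and the helper function are illustrative, not part of the TensorRT API):

```python
# TensorRT serialized engines are tied to the TensorRT build that
# produced them; deserializing under a different version fails.
def can_deserialize(build_version: str, runtime_version: str) -> bool:
    """True only when the engine was built by the same TensorRT
    version that is trying to load it (illustrative check)."""
    return build_version == runtime_version

print(can_deserialize("8.0.0", "8.0.0"))  # True: same version, loads fine
print(can_deserialize("8.0.0", "7.0.0"))  # False: mismatch, deserialization error
```

So even on a single machine, the error can appear if the engine builder and the runtime (here, the TensorRT libraries DeepStream links against) come from different installs.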
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks
• TensorRT Version 8.0.0-1
From the description, you used TensorRT 8.0.0. For DeepStream 5.0, you should use TensorRT 7.0.x.
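As a quick way to compare the installed package version (e.g. the "8.0.0-1" reported above, where "-1" is a Debian package suffix) against what a DeepStream release expects, one can extract the major.minor components; a hedged sketch, with the helper name my own:

```python
# Sketch: extract the TensorRT major.minor from a package version string
# such as "8.0.0-1" and compare it with what DeepStream 5.0 expects (7.0.x).
def trt_major_minor(pkg_version: str) -> tuple:
    base = pkg_version.split("-")[0]      # drop Debian suffix: "8.0.0-1" -> "8.0.0"
    major, minor = base.split(".")[:2]    # keep only "major.minor"
    return int(major), int(minor)

installed = trt_major_minor("8.0.0-1")    # (8, 0)
required = (7, 0)                         # DeepStream 5.0 -> TensorRT 7.0.x
print(installed == required)              # False: installed TRT does not match
```

In this case the fix is to install the TensorRT 7.0.x release matching DeepStream 5.0 and rebuild the engine with it.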