Serialization Error in verifyHeader: 0 (Version tag does not match) while running deepstream-app with YOLOv4

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Tesla T4
• DeepStream Version 5.0
• TensorRT Version 8.0.0-1
• NVIDIA GPU Driver Version (valid for GPU only) 11.3

I am following the steps given in GitHub - NVIDIA-AI-IOT/yolov4_deepstream to create a TensorRT engine file, and then I use it to run deepstream-app. However, deepstream-app fails with the serialization error shown in the title (`Serialization Error in verifyHeader: 0 (Version tag does not match)`).

Based on this topic and a few others, this error is caused by a mismatch between the TensorRT version used to create the engine and the one used to deserialize it. However, I am creating the engine on the same machine, so I assume the same TensorRT version is used for both tasks.
Do you know what could possibly cause this problem?

Thanks

Did you build the engine and deploy it to DeepStream in the same environment? Did you run the DeepStream app in Docker or on the host?

@Amycao Yes, I built the engine and deployed it to DeepStream in the same environment. The DeepStream app is running on the host as well.

Did you build the engine following this: Tianxiaomo/pytorch-YOLOv4: PyTorch, ONNX and TensorRT implementation of YOLOv4 (github.com)? Or this: yolov4_deepstream/tensorrt_yolov4 at master · NVIDIA-AI-IOT/yolov4_deepstream (github.com)?

Hi @Amycao

I am following the first one, i.e., GitHub - Tianxiaomo/pytorch-YOLOv4: PyTorch, ONNX and TensorRT implementation of YOLOv4

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

• TensorRT Version 8.0.0-1

From the description, you built the engine with TensorRT 8.0.0, but DeepStream 5.0 requires TensorRT 7.0.x, so deserialization fails the version check. Please use TensorRT 7.0.x instead.
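One way to verify this on your machine is to compare the TensorRT release seen by the Python toolchain (which builds the engine) against the `libnvinfer.so` shared library (which deepstream-app uses to deserialize it). The sketch below is a minimal check, assuming a standard Linux install where `libnvinfer.so` is on the loader path; it is not part of either repository referenced above:

```python
# Minimal sketch: report the TensorRT version on the build side (Python
# module) and on the deploy side (libnvinfer.so). If the two differ,
# "Version tag does not match" is expected at deserialization time.
import ctypes


def python_trt_version():
    """Version of the tensorrt Python module used to build the engine,
    or None if the module is not installed."""
    try:
        import tensorrt
        return tensorrt.__version__
    except ImportError:
        return None


def libnvinfer_version():
    """Version of the libnvinfer shared library that deserializes the
    engine. getInferLibVersion() encodes it as major*1000 + minor*100
    + patch (e.g. 7000 for TensorRT 7.0.0). Returns None if the
    library cannot be loaded."""
    try:
        lib = ctypes.CDLL("libnvinfer.so")
        v = lib.getInferLibVersion()
        return "{}.{}.{}".format(v // 1000, (v % 1000) // 100, v % 100)
    except OSError:
        return None


if __name__ == "__main__":
    print("tensorrt (Python module):", python_trt_version())
    print("libnvinfer (shared library):", libnvinfer_version())
```

If the two versions printed do not match (or the Python side reports 8.0.x on a DeepStream 5.0 host), rebuilding the engine with the matching TensorRT 7.0.x toolchain should resolve the error.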

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.