Cannot deploy YOLO from TLT Object detection on AGX Xavier

Hi, I successfully trained, pruned, retrained, and exported a TLT YOLO model, and tried to deploy it on an AGX Xavier. To the best of my knowledge, we cannot use DeepStream for YOLO, so I tried TensorRT instead. However, it shows the error message below and cannot run inference. Could you let me know how to solve this problem, or provide a guide explaining how to deploy a TLT YOLO model on Xavier? Thank you so much in advance.

Hi,

DeepStream does support YOLO-based models.
You will need to use it for parsing the TLT format.

Please find the TLT+Deepstream sample below:
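For reference, the nvinfer config for a TLT-exported YOLO model typically looks something like the sketch below. The paths, model key, and file names are placeholders for your own setup; the parser function and library names follow the deepstream_tlt_apps sample, so check that repo for the exact values matching your version:

```ini
[property]
# Encrypted .etlt model exported from TLT (placeholder path)
tlt-encoded-model=models/yolo_resnet18.etlt
# Key used when exporting the model from TLT (placeholder)
tlt-model-key=YOUR_TLT_KEY
labelfile-path=labels.txt
# Custom bounding-box parser built from the deepstream_tlt_apps sample
parse-bbox-func-name=NvDsInferParseCustomYOLOV3TLT
custom-lib-path=post_processor/libnvds_infercustomparser_tlt.so
# 0 = detector
network-type=0
```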

Thanks.

The error message I attached is the result of following the instructions you provided.
The GitHub repo you linked is exactly what I have done. Could you tell me what the possible reason might be?
Thank you.

Hi,

Serialization Error in verifyHeader

This error indicates that you are using different TensorRT versions for serializing and deserializing the engine.

Please note that a TensorRT engine cannot be used across platforms or across TensorRT versions.
You will need to use the same TensorRT version for deserialization, i.e. rebuild the engine on the Xavier itself.
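To illustrate the compatibility rule, here is a minimal sketch (the version strings are hypothetical examples): a serialized engine is only usable when the deserializing TensorRT version matches the one that built it exactly, so any mismatch means rebuilding on the target.

```python
def engine_is_compatible(build_version: str, runtime_version: str) -> bool:
    """TensorRT engines are tied to the exact version that serialized them,
    so treat any mismatch as incompatible and rebuild on the target device."""
    return build_version == runtime_version

# Example: engine built with TensorRT 7.0 on a host PC,
# deployed to a Xavier whose JetPack ships TensorRT 6.0:
print(engine_is_compatible("7.0.0.11", "6.0.1.10"))  # False -> rebuild on Xavier
```

On a Jetson, you can check the installed TensorRT version with `dpkg -l | grep TensorRT` and compare it against the version used on the machine that serialized the engine.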

Thanks.