Version tag does not match

I am trying to create a TensorRT engine on my PC and then run it on my Xavier AGX. Running "dpkg -l | grep tensorrt" on my PC, I find:
nv-tensorrt-repo-ubuntu1804-cuda10.0-trt6.0.1.5-ga-20190913
On my Xavier AGX, I find:
6.0.1.10-1+cuda10.0

But when I try to run my TRT engine on the Xavier, it reports "Serialization Error in verifyHeader: 0 (Version tag does not match)".
What am I doing wrong?

the full error is:
~/tensorrt_demos$ python3 trt_yolov3.py --model yolov3-608 --vid 0 --usb
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (933) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
[TensorRT] ERROR: INVALID_CONFIG: The engine plan file is generated on an incompatible device, expecting compute 7.2 got compute 7.5, please rebuild.
[TensorRT] ERROR: engine.cpp (1324) - Serialization Error in deserialize: 0 (Core engine deserialization failure)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "trt_yolov3.py", line 112, in <module>
    main()
  File "trt_yolov3.py", line 98, in main
    trt_yolov3 = TrtYOLOv3(args.model, (h, w), args.category_num)
  File "/home/xavier/tensorrt_demos/utils/yolov3.py", line 447, in __init__
    self.context = self._create_context()
  File "/home/xavier/tensorrt_demos/utils/yolov3.py", line 399, in _create_context
    return self.engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
Exception ignored in: <bound method TrtYOLOv3.__del__ of <utils.yolov3.TrtYOLOv3 object at 0x7f7f840fd0>>
Traceback (most recent call last):
  File "/home/xavier/tensorrt_demos/utils/yolov3.py", line 455, in __del__
    del self.stream
AttributeError: stream
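
To rule out the loading step itself, here is a minimal sketch (the engine file name "yolov3-608.trt" is just a placeholder) that prints the local TensorRT version and deserializes the plan explicitly, so a version or device mismatch surfaces right away instead of later as the NoneType error:

import tensorrt as trt

print("TensorRT version:", trt.__version__)

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)
with open("yolov3-608.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    # returns None (with the errors above) when the plan was built with a
    # different TensorRT version or for a different GPU architecture
    engine = runtime.deserialize_cuda_engine(f.read())

if engine is None:
    raise RuntimeError("engine did not deserialize -- rebuild it with this "
                       "machine's TensorRT version and GPU")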

I just met the same problem today when I updated my Jetson Xavier NX system from JetPack 4.4 DP to JetPack 4.4 by OTA. After the update I tried to run the TensorRT demo, but it reports this:

/usr/bin/python3.6 /home/jetson/PycharmProjects/Bottle_Check_Test/Detect.py
initing YOLOv3 TrT Engine…
[TensorRT] ERROR: coreReadArchive.cpp (38) - Serialization Error in verifyHeader: 0 (Version tag does not match)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "/home/jetson/PycharmProjects/Bottle_Check_Test/Detect.py", line 87, in <module>
    main()
  File "/home/jetson/PycharmProjects/Bottle_Check_Test/Detect.py", line 72, in main
    trt_yolov3 = TrtYOLOv3(args.model, (yolo_dim, yolo_dim)) #init yolov3
  File "/home/jetson/PycharmProjects/Bottle_Check_Test/utils/yolov3.py", line 443, in __init__
    self.context = self._create_context()
  File "/home/jetson/PycharmProjects/Bottle_Check_Test/utils/yolov3.py", line 399, in _create_context
    return self.engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
Exception ignored in: <bound method TrtYOLOv3.__del__ of <utils.yolov3.TrtYOLOv3 object at 0x7f9eca98d0>>
Traceback (most recent call last):
  File "/home/jetson/PycharmProjects/Bottle_Check_Test/utils/yolov3.py", line 451, in __del__
    del self.stream
AttributeError: stream

Process finished with exit code 1

My toolkit versions are:
deepstream-app version 5.0.0
DeepStreamSDK 5.0.0
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 7.1
cuDNN Version: 8.0
libNVWarp360 Version: 2.0.1d3
Python 3.6
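
Since a serialized plan only deserializes with the exact TensorRT version (and GPU) it was built with, the engine has to be regenerated on the device after an upgrade like this. Here is a minimal rebuild sketch against the TensorRT 7.1 Python API, assuming the original network is available as an ONNX file ("yolov3.onnx" is just a placeholder name):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)

with open("yolov3.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(str(parser.get_error(0)))

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28  # 256 MiB of build workspace

# build_engine() runs on this device, so the saved plan matches the local
# TensorRT 7.1 and the GPU it was built on
engine = builder.build_engine(network, config)
with open("yolov3.trt", "wb") as f:
    f.write(engine.serialize())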

For some reason, I was thinking that I could just move the TRT model over. It doesn't seem like that is actually possible: the engine has to be built on the device (and with the TensorRT version) that will run it. Everything mostly worked once I started creating the TRT models on the Xavier itself.
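
The "expecting compute 7.2 got compute 7.5" error in the first log shows why: a serialized plan is tied to the GPU architecture it was built on. A quick way to see the difference on each machine is to query the compute capability (a minimal sketch using pycuda):

import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on device 0

dev = cuda.Device(0)
major, minor = dev.compute_capability()
print("GPU:", dev.name(), "compute capability: %d.%d" % (major, minor))
# Xavier AGX reports 7.2 (Volta), a Turing desktop GPU reports 7.5 -- the
# exact mismatch TensorRT complains about when the plan is moved across.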