Can DeepStream 3.0 support a TensorRT 5.0.2.6 engine?

We can't load our own *.trt engine files into DeepStream, and want to ask whether DeepStream 3.0 supports engine files built with TensorRT 5.0.2.6.
I used TensorRT 5.0 to generate a yolov3.trt in Python, but it does not work in DeepStream 3.0.

Hi,

DeepStream 3.0 is verified with JetPack 4.1.1, which includes TensorRT 5.0.3.

May I know what kind of error you are seeing?
An engine generated with TensorRT 5.0 should work fine with DeepStream 3.0.

Please note that a TensorRT engine cannot be used across platforms.
You will need to generate the engine directly on the Xavier.
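A serialized TensorRT plan is tied to the exact TensorRT version (and target GPU) that built it, which is why the engine must be generated on the Xavier itself. As a plain-Python illustration of the version check the runtime performs (this is not the TensorRT API, just a sketch of the logic behind the "please rebuild" error):

```python
def parse_version(v):
    """Split a dotted version string like '5.0.2.6' into an int tuple."""
    return tuple(int(p) for p in v.split("."))

def engine_loadable(build_version, runtime_version):
    """A serialized TensorRT plan only deserializes when the runtime
    version matches the version that built the plan exactly."""
    return parse_version(build_version) == parse_version(runtime_version)

print(engine_loadable("5.1.5.0", "5.0.2.6"))  # False: plan must be rebuilt
print(engine_loadable("5.0.2.6", "5.0.2.6"))  # True
```

The same reasoning applies to the GPU: a plan built on a desktop GPU will not deserialize on the Xavier even if the TensorRT versions match.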

Thanks.

python==3.5.6; tensorrt==5.0.2.6 (or) 5.1.5.0; code==

(exclude front)
print('build_cuda_engine...')
t = time()
with open('my_trt.trt', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
t = time() - t
print('finish! cost(seconds):', t)

print('build_cuda_engine...')
t = time()
with open('resnet10.caffemodel_b4_int8.engine', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
t = time() - t
print('finish! cost(seconds):', t)
(end)

terminal==
(exclude front)
build_cuda_engine...
[TensorRT] WARNING: TensorRT was compiled against cuDNN 7.5.0 but is linked against cuDNN 7.1.4
finish! cost(seconds): 2.6789138317108154
build_cuda_engine...

Process finished with exit code -1
(end)

'resnet10.caffemodel_b4_int8.engine' was generated from 'resnet10.caffemodel' in the DeepStream 3.0 sample.
We don't know the output shape of 'resnet10.caffemodel', so we tried to use our own model 'yolov3.trt'.

This is our 'config_infer_primary.txt':
(exclude front)
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373

#model-file=../../models/Primary_Detector/resnet10.caffemodel
#proto-file=../../models/Primary_Detector/resnet10.prototxt
#model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
model-engine-file=../../models/Primary_Detector/my_trt.trt
(exclude bottom)
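The "Cannot use engine file and model files not specified" error in the log below suggests one workaround: keep the model files uncommented so nvinfer can fall back to rebuilding the engine locally when the cached plan is incompatible. A sketch, using the sample paths from the config above:

```ini
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373

# Leave the Caffe files specified so nvinfer can rebuild the engine
# on this machine if the serialized plan cannot be deserialized;
# model-engine-file then acts only as a cache.
model-file=../../models/Primary_Detector/resnet10.caffemodel
proto-file=../../models/Primary_Detector/resnet10.prototxt
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
```

For a custom model like yolov3, the equivalent source files for that model would be needed; an incompatible .trt plan alone cannot be repaired in place.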

terminal==
(deepstream-app:11181): GStreamer-CRITICAL **: 17:04:04.411: passed '0' as denominator for `GstFraction'

Using TRT model serialized engine /home/linux/tool/deepstream/deepstream/DeepStreamSDK-Tesla-v3.0/DeepStream_Release/samples/configs/deepstream-app/../../models/Primary_Detector/my_trt.trt crypto flags(0)
The engine plan file is incompatible with this version of TensorRT, expecting 5.0.2.6, got 5.1.5.0, please rebuild.

Warning. Cannot use engine file /home/linux/tool/deepstream/deepstream/DeepStreamSDK-Tesla-v3.0/DeepStream_Release/samples/configs/deepstream-app/../../models/Primary_Detector/my_trt.trt
Error. Cannot use engine file and model files not specified
** ERROR: main:564: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie_classifier: Failed to initialize infer context
Debug info: gstnvinfer.c(2141): gst_nv_infer_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier
App run failed
[1] + Done /usr/bin/gdb --interpreter=mi --tty=${DbgTerm} 0</tmp/Microsoft-MIEngine-In-wjxop010.w2x 1>/tmp/Microsoft-MIEngine-Out-8l0g1nvo.3yx

Hi,

The issue is not from TensorRT but from cuDNN.
The model is broken due to a cuDNN incompatibility:

[TensorRT] WARNING: TensorRT was compiled against cuDNN 7.5.0 but is linked against cuDNN 7.1.4

There are many dependencies among the driver, CUDA, cuDNN, and TensorRT.
Do you still have the caffemodel/prototxt files?

It's recommended to regenerate the TensorRT engine each time the environment, device, or software is updated.
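The advice above amounts to a load-or-rebuild pattern: reuse a cached engine only when it matches the current runtime, otherwise rebuild it from the model files. A minimal plain-Python sketch (the `build_engine` stub and the `PLAN:` header are hypothetical stand-ins for the real, environment-specific TensorRT build and plan format):

```python
import os

def build_engine(path, version="5.0.2.6"):
    """Hypothetical stand-in for the real TensorRT build step
    (parsing caffemodel/prototxt and serializing a plan)."""
    with open(path, "wb") as f:
        f.write(b"PLAN:" + version.encode())
    return path

def load_or_rebuild(path, runtime_version="5.0.2.6"):
    """Reuse the cached engine only if it was built by this runtime
    version; otherwise discard it and rebuild from the model files."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            header = f.read()
        if header == b"PLAN:" + runtime_version.encode():
            return path  # cache hit: versions match
        os.remove(path)  # stale plan: rebuild for runtime_version
    return build_engine(path, runtime_version)
```

The first call builds and caches the plan; later calls reuse it until the runtime version changes, which mirrors how nvinfer treats model-engine-file as a cache over the model files.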
Thanks.

ok