DeepStream5.1 hangs and silently crashes during engine build

Driver version: 460.32.03
CUDA version: 11.1.105
TensorRT: 7.2.3.4
cuDNN: 8.1.1.33-1+cuda11.2

DeepStream5.1 hangs and silently crashes during engine build from ONNX model file.

Can you share any error log or screenshot for reference?

Unknown or legacy key specified 'is-classifier' for group [property]
Now playing: /home/user/Downloads/2s.mp4
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: The engine plan file is not compatible with this version of TensorRT, expecting library version 7.2.3 got 7.2.1, please rebuild.
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: engine.cpp (1646) - Serialization Error in deserialize: 0 (Core engine deserialization failure)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_STATE: std::exception
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: INVALID_CONFIG: Deserialize the cuda engine failed.
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1567 Deserialize engine failed from file: /home/user/net/1_2.onnx_b1_gpu0_fp32.engine
0:00:09.151475257  4799 0x555e91234b00 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1691> [UID = 1]: deserialize engine from file :/home/user/net/1_2.onnx_b1_gpu0_fp32.engine failed
0:00:09.151515473  4799 0x555e91234b00 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<primary-nvinference-engine1> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1798> [UID = 1]: deserialize backend context from engine from file :/home/user/net/1_2.onnx_b1_gpu0_fp32.engine failed, try rebuild
0:00:09.151521374  4799 0x555e91234b00 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine1> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
----------------------------------------------------------------
Input filename:   /home/user/net/1_2.onnx
ONNX IR version:  0.0.6
Opset version:    12
Producer name:    pytorch
Producer version: 1.8
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
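As an aside, the first warning in the log ("Unknown or legacy key specified 'is-classifier'") comes from a DeepStream 4.x-era config key; in DeepStream 5.x it is superseded by `network-type` (0 = detector, 1 = classifier, 2 = segmentation, 3 = instance segmentation). It is unrelated to the crash, but a minimal sketch of scanning a gst-nvinfer config for such legacy keys might look like this (the sample config text and the key map here are illustrative, not taken from the thread):

```python
import configparser

# Hypothetical map of deprecated DS4 keys to their DS5 replacements.
LEGACY_KEYS = {"is-classifier": "network-type"}

def find_legacy_keys(config_text):
    """Return (section, legacy_key, replacement) for each legacy key found."""
    cp = configparser.ConfigParser()
    cp.read_string(config_text)
    hits = []
    for section in cp.sections():
        for key in cp[section]:
            if key in LEGACY_KEYS:
                hits.append((section, key, LEGACY_KEYS[key]))
    return hits

# Illustrative config fragment in gst-nvinfer style.
sample = """
[property]
is-classifier=1
model-engine-file=/home/user/net/1_2.onnx_b1_gpu0_fp32.engine
"""

print(find_legacy_keys(sample))
```

Replacing `is-classifier=1` with `network-type=1` should silence the warning.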

Are you passing a TRT engine to DeepStream? Was the TRT engine built on this platform?

I am passing ONNX. There were some leftover .engine files, but I deleted them and the problem is the same: it crashes right after printing the ONNX info.
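The serialized-engine errors earlier in the log ("expecting library version 7.2.3 got 7.2.1, please rebuild") are the classic symptom of a stale cached engine left over from an older TensorRT install. Removing any remaining `*.engine` files next to the model forces gst-nvinfer to rebuild from the ONNX. A minimal sketch, assuming the model directory shown in the log (adjust `MODEL_DIR` to your setup):

```shell
# Clear cached TensorRT engines so DeepStream rebuilds them against
# the currently installed TensorRT version. Path is an example.
MODEL_DIR="${MODEL_DIR:-/home/user/net}"
rm -fv "$MODEL_DIR"/*.engine
```

On the next run, gst-nvinfer will log "Trying to create engine from model files" and serialize a fresh engine.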

Do you get the same error when passing the ONNX file?

Yes, except for the first few lines about the .engine files. It crashes right after printing:

ONNX IR version:  0.0.6
Opset version:    12
Producer name:    pytorch
Producer version: 1.8
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------

There is no ONNX-related failure in the log above, so it is hard to tell what is wrong with your ONNX model.

Could you use the TensorRT tool trtexec to run your ONNX file, e.g.

$ trtexec --onnx=file.onnx

and share the log?

Understood, I will get back to you. The machine is an RTX laptop, by the way.

シャハリアル

I faced the same issue on a desktop, but it happens only sometimes.
trtexec works without issue, but I would like to use DeepStream to convert directly.

This issue was not present with DS 5.0.1 and CUDA 10.2.

Can you share a reproduction?

Yes, I will try to send it by tomorrow.


I would have to wade through a lot of red tape to share internal company files. Unfortunately, I am not able to share at this time.

OK. Sorry, but since the log you shared contains no ONNX-related error, I'm afraid there is nothing we can do about this.