I'm trying to start deepstream lpr app on deepstream 7.1, however it keeps resulting in segmentation fault

Hardware Platform Jetson
JetPack Version Jetpack 6.2
DeepStream Version Deepstream 7.1
TensorRT Version TensorRT 10.3
CUDA 12.6
cuDNN 9.3

What could be the issue?

Is the parser causing the issue?

2. LPRNet question
./deepstream-lpr-app lpr_app_infer_us_config.yml
use_nvinfer_server:0, use_triton_grpc:0
Request sink_0 pad from streammux
Unknown or legacy key specified ‘is-classifier’ for group property
!! [WARNING] Unknown param found for nvtracker: enable-batch-process
set analy config
!! [WARNING] Unknown param found : type
!! [WARNING] Unknown param found : enc
!! [WARNING] Unknown param found : filename
Now playing: lpr_app_infer_us_config.yml
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.669840708 91125 0xaaab0e351590 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 3]: deserialized trt engine from :/home/jetson/Documents/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
INFO: [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT image_input 3x48x96 min: 1x3x48x96 opt: 16x3x48x96 Max: 16x3x48x96
1 OUTPUT kINT64 tf_op_layer_ArgMax 24 min: 0 opt: 0 Max: 0
2 OUTPUT kFLOAT tf_op_layer_Max 24 min: 0 opt: 0 Max: 0

0:00:00.669999622 91125 0xaaab0e351590 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 3]: Use deserialized engine model: /home/jetson/Documents/deepstream_lpr_app/models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine
0:00:00.680549661 91125 0xaaab0e351590 INFO nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus: [UID 3]: Load new model:/home/jetson/Documents/deepstream_lpr_app/deepstream-lpr-app/lpr_config_sgie_us.yml sucessfully
Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
WARNING: Deserialize engine failed because file path: /home/jetson/Documents/deepstream_lpr_app/deepstream-lpr-app/../models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt_b16_gpu0_int8.engine open error
0:00:00.694784916 91125 0xaaab0e351590 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2080> [UID = 2]: deserialize engine from file :/home/jetson/Documents/deepstream_lpr_app/deepstream-lpr-app/../models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt_b16_gpu0_int8.engine failed
0:00:00.694814165 91125 0xaaab0e351590 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger: NvDsInferContext[UID 2]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2185> [UID = 2]: deserialize backend context from engine from file :/home/jetson/Documents/deepstream_lpr_app/deepstream-lpr-app/../models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt_b16_gpu0_int8.engine failed, try rebuild
0:00:00.694827701 91125 0xaaab0e351590 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger: NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 2]: Trying to create engine from model files
WARNING: [TRT]: onnxOpImporters.cpp:6117: Attribute caffeSemantics not found in plugin node! Ensure that the plugin creator has a default value defined or the engine may fail to build.
WARNING: [TRT]: BatchedNMSPlugin is deprecated since TensorRT 9.0. Use INetworkDefinition::addNMS() to add an INMSLayer OR use EfficientNMS plugin.
WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
WARNING: [TRT]: Calibration Profile is not defined. Calibrating with Profile 0
ERROR: [TRT]: Unexpected exception _Map_base::at
Segmentation fault (core dumped)
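One thing worth noting from the log above: the LPRNet engine deserializes fine, but the yolov4-tiny INT8 engine fails to open, so nvinfer falls back to rebuilding it from the .etlt, and that rebuild is where the crash happens. A quick sanity check that the expected engine files exist (paths copied from the log; run from the deepstream_lpr_app root and adjust if your layout differs):

```shell
# Engine paths taken from the log above; run from the deepstream_lpr_app root.
# A missing file here means nvinfer will try to rebuild that engine at startup.
for f in \
  models/LP/LPR/us_lprnet_baseline18_deployable.etlt_b16_gpu0_fp16.engine \
  models/tao_pretrained_models/yolov4-tiny/yolov4_tiny_usa_deployable.etlt_b16_gpu0_int8.engine
do
  if [ -f "$f" ]; then echo "found:   $f"; else echo "missing: $f"; fi
done
```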

To compile the custom parser, I had to remove -lnvparsers from line 28 of the Makefile, leaving:
LIBS := -lnvinfer
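That removal lines up with the platform: TensorRT 10 dropped libnvparsers entirely (the Caffe/UFF parsers were removed), so linking only against libnvinfer is expected on JetPack 6.2. A quick check to confirm (the library path below assumes the standard Jetson aarch64 location; adjust if yours differs):

```shell
# TensorRT 10 (JetPack 6.2) no longer ships libnvparsers, so the first ls
# should print nothing; libnvinfer should still be present.
# Path assumes an aarch64 Jetson; adjust for other installs.
ls /usr/lib/aarch64-linux-gnu/libnvparsers.so* 2>/dev/null \
  || echo "libnvparsers not found (expected on TensorRT 10)"
ls /usr/lib/aarch64-linux-gnu/libnvinfer.so* 2>/dev/null \
  || echo "libnvinfer not found (check your TensorRT install)"
```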

It appears you are using a legacy version of deepstream-lpr-app. The LPR app no longer uses YOLOv4 models; YOLOv4 is no longer supported in DeepStream 7.1.

Please refer to this link.