Application running problem

• **Hardware Platform (Jetson / GPU)** - Jetson AGX Xavier
• **DeepStream Version** - 5.1
• **JetPack Version (valid for Jetson only)** - 4.5

Hello, I am having problems running my application. Here is the log:

Request sink_0 pad from streammux
Request sink_1 pad from streammux
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Warn: ‘threshold’ parameter has been deprecated. Use ‘pre-cluster-threshold’ instead.
Now playing: 1
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:03.102078128 15780 0x55b4661890 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 3]: deserialized trt engine from :/home/forest/deepstream_lpr_app/model_b16.engine
INFO: [FullDims Engine Info]: layers num: 3
0 INPUT kFLOAT motion_input:0 3x72x72 min: 1x3x72x72 opt: 1x3x72x72 Max: 2x3x72x72
1 INPUT kFLOAT appearance_input:0 3x72x72 min: 1x3x72x72 opt: 1x3x72x72 Max: 2x3x72x72
2 OUTPUT kFLOAT lambda_1/Squeeze:0 0 min: 0 opt: 0 Max: 0

0:00:03.102348867 15780 0x55b4661890 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 3]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 3]: Use deserialized engine model: /home/forest/deepstream_lpr_app/model_b16.engine
0:00:03.104003410 15780 0x55b4661890 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::allocateBuffers() <nvdsinfer_context_impl.cpp:1320> [UID = 3]: Failed to allocate cuda output buffer during context initialization
0:00:03.104047989 15780 0x55b4661890 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1170> [UID = 3]: Failed to allocate buffers
0:00:03.107435705 15780 0x55b4661890 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:03.107499486 15780 0x55b4661890 WARN nvinfer gstnvinfer.cpp:812:gst_nvinfer_start: error: Config file path: /home/forest/deepstream_lpr_app/deepstream-lpr-app/HR_sec_config.txt, NvDsInfer Error: NVDSINFER_CUDA_ERROR
Running…
ERROR from element secondary-infer-engine3: Failed to create NvDsInferContext instance
Error details: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(812): gst_nvinfer_start (): /GstPipeline:pipeline/GstNvInfer:secondary-infer-engine3:
Config file path: /home/forest/deepstream_lpr_app/deepstream-lpr-app/HR_sec_config.txt, NvDsInfer Error: NVDSINFER_CUDA_ERROR
Returned, stopping playback
Average fps 0.000233
Deleting pipeline

nvinfer_custom_lpr_parser.cpp (8.4 KB)
deepstream_lpr_app.c (23.8 KB)
pgie_config_fd_lpd.txt (3.9 KB)
HR_sec_config.txt (400 Bytes)

What’s the exact problem here? Does the problem only occur after applying your change?

The problem is that it can’t allocate the memory. I don’t know exactly, but I think the output is not correct. And yes, the problem only occurs after my change.
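For what it’s worth, the allocation failure is consistent with the engine info in the log: the output layer `lambda_1/Squeeze:0` reports dimensions of 0, so the CUDA output buffer nvinfer would request is degenerate. A rough sketch of the size arithmetic (my own illustration, not actual DeepStream code):

```python
from math import prod

def output_buffer_bytes(dims, batch_size, elem_size=4):
    """Approximate size of the buffer nvinfer would allocate for a layer:
    batch * product of the layer dims * bytes per element (kFLOAT = 4)."""
    return batch_size * prod(dims) * elem_size

# A normal 3x72x72 float layer at batch 1 needs a real buffer:
print(output_buffer_bytes([3, 72, 72], 1))  # 62208 bytes

# lambda_1/Squeeze:0 reports dims of 0 in the log, so the
# requested allocation collapses to zero bytes:
print(output_buffer_bytes([0], 1))          # 0 bytes
```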

Hi @dgodomg ,
The DeepStream nvinfer plugin only supports single-input networks for now.
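You can see this directly in your posted `[FullDims Engine Info]` section: the engine declares two INPUT bindings (`motion_input:0` and `appearance_input:0`). A quick sanity check on such a dump might look like this (illustrative sketch only, parsing the log text, not the engine itself):

```python
# Engine-info lines copied from the posted log.
ENGINE_INFO = """\
0 INPUT kFLOAT motion_input:0 3x72x72 min: 1x3x72x72 opt: 1x3x72x72 Max: 2x3x72x72
1 INPUT kFLOAT appearance_input:0 3x72x72 min: 1x3x72x72 opt: 1x3x72x72 Max: 2x3x72x72
2 OUTPUT kFLOAT lambda_1/Squeeze:0 0 min: 0 opt: 0 Max: 0
"""

def count_inputs(engine_info: str) -> int:
    """Count the layers marked INPUT in an nvinfer engine-info dump."""
    return sum(1 for line in engine_info.splitlines()
               if line.split()[1:2] == ["INPUT"])

print(count_inputs(ENGINE_INFO))  # 2 -> more than one input binding
```

Anything above one input means the engine cannot be run by nvinfer on DeepStream 5.1; you would need to export a single-input model (or fuse the two inputs) for this pipeline.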