Source info not found for source 2. Maybe the GST_NVEVENT_PAD_ADDED event was never generated for the source

0:01:08.378639504 23885 0x3148c980 WARN nvinfer gstnvinfer.cpp:1705:gst_nvinfer_process_objects: Source info not found for source 2. Maybe the GST_NVEVENT_PAD_ADDED event was never generated for the source.

0:01:08.394033843 23885 0x3148caa0 WARN nvinfer gstnvinfer.cpp:1705:gst_nvinfer_process_objects: Source info not found for source 1. Maybe the GST_NVEVENT_PAD_ADDED event was never generated for the source.
0:01:08.394058108 23885 0x3148c8c0 WARN nvinfer gstnvinfer.cpp:1705:gst_nvinfer_process_objects: Source info not found for source 1. Maybe the GST_NVEVENT_PAD_ADDED event was never generated for the source.
0:01:08.413693848 23885 0x22bc08c0 WARN nvinfer gstnvinfer.cpp:1705:gst_nvinfer_process_objects: Source info not found for source 2. Maybe the GST_NVEVENT_PAD_ADDED event was never generated for the source.
0:01:08.413718584 23885 0x22bc0800 WARN nvinfer gstnvinfer.cpp:1705:gst_nvinfer_process_objects: Source info not found for source 2. Maybe the GST_NVEVENT_PAD_ADDED event was never generated for the source.
0:01:08.420194007 23885 0x3148c980 WARN nvinfer gstnvinfer.cpp:1705:gst_nvinfer_process_objects: Source info not found for source 2. Maybe the GST_NVEVENT_PAD_ADDED event was never generated for the source.
0:01:08.421480022 23885 0x3148caa0 WARN nvinfer gstnvinfer.cpp:1705:gst_nvinfer_process_objects: Source info not found for source 1. Maybe the GST_NVEVENT_PAD_ADDED event was never generated for the source.
0:01:08.421515776 23885 0x3148c8c0 WARN nvinfer gstnvinfer.cpp:1705:gst_nvinfer_process_objects: Source info not found for source 1. Maybe the GST_NVEVENT_PAD_ADDED event was never generated for the source.

Hi, I am trying to run DeepStream parallel inference using Python, but when I run three PGIE branches I get the warnings above.

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
• The pipeline being used

• Hardware Platform: GPU
• DeepStream Version: 6.2
• TensorRT Version: 8.5
• NVIDIA GPU Driver Version: 535

@yingliu, no reply from your side yet.

Sorry for the late reply.

Can you share your configuration files and sample Python code?

I will try to reproduce it.

I have a couple of configuration files!
I am trying to implement the deepstream-parallel-inference sample using Python.

This warning usually means that no objects were added to the metadata by upstream components.
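As a way to narrow this down, the per-source object counts gathered in a pad probe (via pyds.gst_buffer_get_nvds_batch_meta in a real pipeline) can be tallied to see which sources produce no objects at all. This is a hypothetical helper, not code from the thread; it works on plain (source_id, object_count) pairs so the logic is independent of DeepStream:

```python
# Hypothetical helper: given (source_id, object_count) pairs collected in a
# pad probe (in a real pipeline these would come from iterating
# NvDsBatchMeta/NvDsFrameMeta via pyds), report which sources are producing
# no objects -- the situation this warning points at.
from collections import Counter

def sources_without_objects(frame_stats):
    """frame_stats: iterable of (source_id, num_obj) pairs, one per frame meta."""
    totals = Counter()
    for source_id, num_obj in frame_stats:
        totals[source_id] += num_obj
    # A source that appeared in the batch meta but never carried objects.
    return sorted(sid for sid, total in totals.items() if total == 0)

# Example: sources 1 and 2 yield no detections while source 0 does.
print(sources_without_objects([(0, 3), (1, 0), (2, 0), (1, 0)]))  # -> [1, 2]
```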

Can you reproduce this problem if you use the deepstream-parallel-inference sample?

I can’t confirm whether it is a problem with your configuration or with your Python code.

Let’s say I have four PGIEs: A, B, C, and D.
When I use A and B, I don’t encounter this warning, but when I use C and D I do.
Okay, let me share my configs.


This is my pipeline:

Please find my config files in this zip and let me know if you are able to fix this problem ASAP.

nvidia_forum_config.zip (8.3 KB)

@junshengy @yingliu Did you find any solution? Any update?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

I have tried modifying the configuration file to run four-way parallel inference using deepstream_parallel_infer_app as you described. It works normally.

From the logs you provided, I suspect the problem is that streammux is not successfully linked with nvinfer.

I can help you if you share the sample code.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.