Probe returns mismatched source id for video feeds

• Jetson NX
• DeepStream 6.0
• JetPack 4.6
• Issue Type: possibly a bug

Good day,

My colleague and I have a setup in which we run inference on objects detected in RTSP video feeds, and I’d like some help with an issue we’re hitting. It is similar to the one linked below: the probe from which we retrieve frame meta information returns source id zero (pertaining to the first feed) every time it is triggered by an object detection in any of the feeds, instead of the source id of the feed that actually triggered it. I’d like to know whether anything can be done about this. The NVIDIA correspondent in the link says it was a bug in DeepStream 5.0; we are running 6.0.

I’d appreciate help from anybody that can shed light on this. Thanks!

Thanks for sharing. What is the media pipeline? Which sample are you testing? Could you provide simplified code to reproduce this issue?

Hello @fanzh
I’m not sure what you mean by the media pipeline. Do you mean the type of information it streams? If so, it receives RTSP video feeds, most of which are live; some are recorded for testing purposes.

We are not running a sample, and yes, I can share simplified code. When building the main pipeline, after creating and linking the elements, we add the probe on the tracker element:

	/* Set the state of the pipeline bin to paused */
	gst_element_set_state (pipeline, GST_STATE_PAUSED); 
	
	/* Create a dataflow src pad probe on the tracker*/
	tracker_src_pad = gst_element_get_static_pad (tracker, "src");
	if (!tracker_src_pad){
		msg = "Unable to get tracker src pad probe\n";
		outlog(&msg);
		return -1;
	}
	else
	{
		gst_pad_add_probe (tracker_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
			Tracker::tracker_src_pad_buffer_probe, &rtsp_feed_vector, NULL);
		msg = "Added tracker src pad probe\n";
		outlog(&msg);
	}
	
		
	/* Set the state of the pipeline to playing */
	msg ="Setting pipeline to playing state...\n";
	outlog(&msg);
	if (gst_element_set_state (pipeline,
			GST_STATE_PLAYING) == GST_STATE_CHANGE_FAILURE) {
		msg = "Failed to set pipeline to playing --> Exiting\n";
		outlog(&msg);
		return -1;
	}

	/* Run the main loop */
	g_main_loop_run (loop);

As for the buffer probe function:

GstPadProbeReturn Analytics::Tracker::tracker_src_pad_buffer_probe(GstPad * pad, 
	GstPadProbeInfo * info, gpointer u_data){

	/* Receive the rtsp_feed_vector as part of the probe callback function */
	std::vector<Feed *> rtsp_feed_vector = *(std::vector<Feed *> *)(u_data);

	/* A batch of metadata in a buffer */
    GstBuffer *buf = (GstBuffer *) info->data;

	/* Initialise a pointer to an empty metadata list that will store the
	* metadata associated with every frame in a batch of frames */
    NvDsMetaList * frame = NULL;

	/* Initialise a pointer to an empty metadata list that will store the
	* metadata associated with every object detected in a single frame */
    NvDsMetaList * obj = NULL;

	/* Holds the object metadata for each detection in the frame */
    NvDsObjectMeta *nvds_obj_meta = NULL;

	/* Retrieve a pointer to the NvDsBatchMeta structure, which contains the
	* metadata associated with this batch of frames */
    NvDsBatchMeta * batch_meta = gst_buffer_get_nvds_batch_meta (buf);

	/* This loop iterates through each frame within the batch */
    for (frame = batch_meta->frame_meta_list; frame != NULL; frame = frame->next)
	{
        
		/* extract the metadata associated with the frame*/
		NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (frame->data); 
        int source_id = frame_meta->source_id; // always returns ZERO
	
		/* This loop iterates through each detected object in the frame */
        for (obj = frame_meta->obj_meta_list; obj != NULL; obj = obj->next)
        {
			/* Get NVDS metadata associated with the object */
			nvds_obj_meta = (NvDsObjectMeta *) (obj->data);

			/* Get necessary metadata of the object including tracker and geometric properties like rect_params and confidence */
			...

        } 
    }
    return GST_PAD_PROBE_OK;
}
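
Downstream of this, the intent is to use source_id to look up the feed that produced each frame. A minimal sketch of that lookup (Feed is our own class, and this assumes rtsp_feed_vector is ordered to match the streammux sink pad indices):

	/* Illustrative only (not part of the probe above): inside the frame loop,
	* map the frame back to its feed, assuming rtsp_feed_vector is ordered by
	* the nvstreammux sink pad index the source was linked to */
	if (source_id >= 0 && (size_t) source_id < rtsp_feed_vector.size ()) {
		Feed *feed = rtsp_feed_vector[source_id];
		/* ... associate the detections in this frame with 'feed' ... */
	}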

Hello again @fanzh
Just thought I’d check in, in case you have an update.

Sorry for the late reply. Do you mean the bug is not fixed in DeepStream 5.0.1? Could you provide the whole code to reproduce this issue? Thanks.
Also, I tested the configuration source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt on DS 6.2 + Xavier, and the output video is correct.

@fanzh No, the DeepStream version we are running is 6.0.
Is the bug present in this version as well?

  1. It should be fixed in DeepStream 5.0.1; please refer to the first bug listed in announcing-deepstream-5-0-1.
  2. Please make sure source-id is set in [sinkx].
  3. Can you try DS 6.2? Or could you provide whole simplified code and the configuration file to reproduce this issue?

#1 - Thanks, went through the announcement.
#2 - On this, do you mean in the config file? Here is our config file:
config_resnet.txt (3.9 KB)

Hello @fanzh

  1. This is nvinfer’s configuration file; please share the whole application’s configuration. Please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt
  2. What is the whole media pipeline? For example, deepstream-test1’s pipeline is "file-source → h264-parser → nvh264-decoder → pgie → nvvidconv → nvosd → video-renderer".
  3. As I said above, I tested the configuration source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt on DS 6.2 + Xavier and the output video is correct. Can you test it?

@fanzh,

  1. We do not maintain such a configuration file. Mainly, the elements are configured as the pipeline is being built, in the pipeline code.

  2. Oh, I see. Then ours would be " rtsp-sources → streammux → pgie → tracker → tiler → videoconvert → nvosd → videoconvert_2 → tee → encoder → rtppay → udp_sink "
    These are the elements in our pipeline bin, in that order
    We’ve added a dataflow source pad probe on the tracker, as mentioned.

  3. Our pipeline is instantiated differently. For one, our sources are dynamically added, so their number and the order in which they’re added usually vary; that’s why I said we do not maintain such a configuration. But if you’d like to see how the elements are configured, I can upload the most interesting parts of the pipeline code here: where the main pipeline is created and these elements are defined and configured. A rough sketch of how each dynamically added source is attached to the streammux follows below.
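
To illustrate (this is a sketch only, not our exact code; names like source_bin and source_index are placeholders): each new RTSP source bin is linked to a requested nvstreammux sink pad, and that pad index is what we expect to come back later as frame_meta->source_id.

	/* Illustrative sketch only: attach a dynamically created source bin to
	* the streammux on a requested sink pad; the pad index is what should
	* come back later as frame_meta->source_id */
	gchar pad_name[16];
	g_snprintf (pad_name, sizeof (pad_name), "sink_%u", source_index);

	GstPad *sinkpad = gst_element_get_request_pad (streammux, pad_name);
	GstPad *srcpad = gst_element_get_static_pad (source_bin, "src");

	if (!sinkpad || !srcpad || gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
		msg = "Failed to link source bin to streammux\n";
		outlog (&msg);
	}

	if (srcpad) gst_object_unref (srcpad);
	if (sinkpad) gst_object_unref (sinkpad);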

Thank you for your time and help, @fanzh

Tagging: @mdegans
While scouring the forums, I noticed he’s helped someone with a similar problem. From what was explained there (repositioning the probe to before the tiler), my setup shouldn’t have this issue. Possibly I wasn’t clear on the resolution. I’m hoping he can weigh in.

I noticed that printing the source id in the first for loop, which iterates through the frames within a batch, gives the various source ids. Printing it within the second for loop, which iterates through the objects in each frame, only ever shows the first source id, zero. Not normal behaviour, I’d assume?
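
To be concrete, the two prints sit roughly here (simplified from the probe shared above, with g_print standing in for our logger):

	for (frame = batch_meta->frame_meta_list; frame != NULL; frame = frame->next) {
		NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (frame->data);

		/* This print shows the various source ids, as expected */
		g_print ("frame loop: source_id = %u\n", frame_meta->source_id);

		for (obj = frame_meta->obj_meta_list; obj != NULL; obj = obj->next) {
			/* This print only ever shows 0 */
			g_print ("object loop: source_id = %u\n", frame_meta->source_id);
		}
	}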

Could you share whole simplified code to reproduce this issue? Then I can test and debug it. Thanks!

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

The pipeline is similar to deepstream-test1’s. To narrow down the issue, you can test with nvosd + fakesink. Could you provide simplified code based on deepstream-test1? Thanks!
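
For example, something along these lines (pipeline and nvosd here are placeholders for your own element variables), replacing the tee/encoder/rtppay/udpsink branch with a fakesink right after nvosd:

	/* Illustrative only: terminate the pipeline with a fakesink after nvosd
	* instead of the encoder/rtppay/udpsink branch */
	GstElement *fakesink = gst_element_factory_make ("fakesink", "fake-sink");
	g_object_set (G_OBJECT (fakesink), "sync", FALSE, NULL);

	gst_bin_add (GST_BIN (pipeline), fakesink);
	if (!gst_element_link (nvosd, fakesink)) {
		g_printerr ("Failed to link nvosd to fakesink\n");
	}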

Please excuse my delay in responding; my colleague and I were (and still are) caught up porting our code to C, basing it on the dstest3 example from the sample apps, minus some of our custom features such as additions to the tracking of detected objects. It runs without mixing up the source ids. We don’t see anything too different between the C++ version I shared above and the new C code. We would still like a resolution to the C++ query if possible, as that version is much further along in development and many things can be streamlined there.

Would you like to see the dstest3 code, or can you locate it in the sample apps?

The sample runtime_source_add_delete adds and deletes sources dynamically. I tested it on Jetson + DS 6.2, adding a probe function on the tracker, and frame_meta->source_id is not always 0. Here are the command, code diff and log:
./deepstream-test-rt-src-add-del rtsp://xxx 0 filesink 1
deepstream_test_rt_src_add_del.c (23.2 KB)
log.log (38.1 KB)
To narrow down this issue, can you reproduce the “frame_meta->source_id is 0” behaviour based on the sample runtime_source_add_delete?

This is the latest sample we adopted. I first tried it out in Python (the Python DeepStream apps include this sample as well) with good results, so as of last week we have been developing the C version of our code, basing it on this same C sample app.

Have you looked through the C++ code I shared above? In your estimation, is there something drastically different we are doing that should produce the mixed-up results we are getting? I would compare ours against C++ reference apps if any were available.

The code is not special; the method of accessing source_id is correct.

Then I wonder what could cause the frames in that batch loop to appear folded into a single source. Is it worth sharing the section of the main pipeline’s code where we configure the elements, in case something’s amiss there? And perhaps we could confer with @Fiona.Chen, as she’s been helpful in the past.