Good day!
I am using Python to prototype an application that detects objects.
My desired pipeline is: video source, nvinfer, nvtracker, OSD, and a video sink for display. I also want a feature where the user can set a bounding box for the tracker directly, bypassing the inference element. I chose NvDCF for tracking because it can do visual tracking as well as filter inference results. To pass the user input in, I attached a GStreamer probe to the buffers on the tracker's sink pad.
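For reference, this is roughly how the probe is registered (a minimal sketch; the element and callback names are placeholders from my prototype, and the nvtracker properties are configured elsewhere):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvtracker element; ll-lib-file / ll-config-file are set elsewhere
tracker = Gst.ElementFactory.make("nvtracker", "tracker")

def tracker_sink_pad_buffer_probe(pad, info, u_data):
    # the user-bbox injection shown further below happens here
    return Gst.PadProbeReturn.OK

# attach the probe to the tracker's sink pad so it sees every buffer
# before the tracker processes it
tracker.get_static_pad("sink").add_probe(
    Gst.PadProbeType.BUFFER, tracker_sink_pad_buffer_probe, 0)
```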
The problem is that once I pass the user's bounding box in, it survives only a few more frames and then disappears. If I launch the pipeline without inference, everything works fine: the tracker follows the object without any issues. But I need inference as well. I tried using a tee element to create two branches: one with inference, tracker, OSD, and sink, and another with only the user tracker and a fakesink (I planned to share just the bbox info with the OSD through a probe callback on the OSD element). It looks odd, but it seems both branches share the same buffer, so inference still affects the user tracker and keeps resetting it.
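Roughly, that tee topology looked like the sketch below (simplified: the config paths, the streammux properties, and the exact position of nvstreammux relative to the tee are placeholders rather than my exact settings):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# one streammux, then tee: branch 1 = inference + tracker + OSD + display,
# branch 2 = user-input tracker only, read out via a probe, no display
pipeline = Gst.parse_launch(
    "v4l2src ! nvvideoconvert ! video/x-raw(memory:NVMM) ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1280 height=720 ! tee name=t "
    "t. ! queue ! nvinfer config-file-path=pgie_config.txt "
    "   ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so "
    "   ! nvdsosd ! nv3dsink "
    "t. ! queue "
    "   ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so "
    "   ! fakesink"
)
```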
So the question is: how can I run a user-input tracker and inference simultaneously?
• Hardware Platform: Jetson Xavier AGX
• DeepStream Version: 6.3
• JetPack Version: 35.4.1-20230801124926
• TensorRT Version: 8.5.2-1+cuda11.4
• Issue Type: questions
• How to reproduce: gst-launch-1.0 v4l2src ! nvvideoconvert ! nvinfer ! nvtracker ! nvdsosd ! nv3dsink
• The probe callback that sets up the tracker:
user_input_object = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
user_input_display = pyds.nvds_acquire_display_meta_from_pool(batch_meta)  # acquired but not used further in this snippet
user_input_object.class_id = 88 # make sure it does not clash with the inference class IDs
user_input_object.confidence = 0.9999999 # we trust our user
user_input_object.obj_label = "user_input"
user_input_object.object_id = 0xFFFFFFFFFFFFFFFF # UNTRACKED_OBJECT_ID
user_input_object.tracker_confidence = 0.9999 # same for tracker
user_input_object.unique_component_id = 777 # distinct component id; the user input appears at most once per scene
user_input_object.rect_params.bg_color.set(0.25, 1.0, 0.1, 0.5) # arguments are r, g, b, a
user_input_object.rect_params.border_color.set(1.0, 1.0, 0.0, 0.0) # last argument is alpha, so 0.0 makes the border fully transparent
user_input_object.rect_params.border_width = 4
user_input_object.rect_params.color_id = 0
user_input_object.rect_params.has_bg_color = 1
user_input_object.rect_params.height = h
user_input_object.rect_params.left = l
user_input_object.rect_params.top = t
user_input_object.rect_params.width = w
#print("ds set ui_object rect_params to " + str(h) + " " + str(l) + " " + str(t) + " " + str(w))
user_input_object.text_params.display_text = "user_input"
user_input_object.text_params.x_offset = l + 5
user_input_object.text_params.y_offset = t + 5
user_input_object.detector_bbox_info.org_bbox_coords.left = l
user_input_object.detector_bbox_info.org_bbox_coords.top = t
user_input_object.detector_bbox_info.org_bbox_coords.width = w
user_input_object.detector_bbox_info.org_bbox_coords.height = h
user_input_object.tracker_bbox_info.org_bbox_coords.left = l
user_input_object.tracker_bbox_info.org_bbox_coords.top = t
user_input_object.tracker_bbox_info.org_bbox_coords.width = w
user_input_object.tracker_bbox_info.org_bbox_coords.height = h
pyds.nvds_add_obj_meta_to_frame(frame_meta, user_input_object, None)
frame_meta.bInferDone = 1
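For context, the snippet above is the body of the probe callback; batch_meta and frame_meta come from the buffer roughly like this (a minimal sketch; h, l, t, w come from the user's input and are assumed to be set elsewhere):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def tracker_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # -> the code shown above goes here: acquire the object meta,
        #    fill in the user bbox (h, l, t, w), add it to frame_meta,
        #    and set frame_meta.bInferDone = 1
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```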
After some time I partially solved the problem: I added a tee element and fed the same video input into nvstreammux twice, then added a second tracker in series after the first one. With that, the OSD shows both the user-tracked object and the objects tracked from inference, but the image blinks, which is annoying… Is there a "normal" way to solve this problem?
Thanks in advance!