Data are not fully pushed to the database when using multiple streams

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
GPU
• DeepStream Version
DS6.4
• JetPack Version (valid for Jetson only)
• TensorRT Version
8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only)
535.171.04
• Issue Type( questions, new requirements, bugs)
The database is not fully populated when using multiple streams
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I am currently using Kafka and Postgres to collect data from DeepStream. The data are complete when using one stream; however, some data go missing when multiple streams are used.
I verified this by comparing the images saved to disk against the rows in the database: the number of rows in the database should equal the number of images captured from the streams.
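The check described above can be sketched as follows. This is a hypothetical sketch, not code from the thread: the directory layout, the `.jpg` extension, and the helper names are my assumptions, and the database row count would come from a separate `SELECT COUNT(*)` query against Postgres.

```python
# Hypothetical verification sketch: compare the number of snapshots the
# probe wrote with cv2.imwrite against the number of rows that actually
# reached Postgres via Kafka. Helper names and the .jpg suffix are
# assumptions, not taken from the thread.
import os

def count_saved_images(image_dir: str) -> int:
    """Count the .jpg snapshots written by the probe."""
    return sum(1 for name in os.listdir(image_dir) if name.endswith(".jpg"))

def missing_events(image_dir: str, db_row_count: int) -> int:
    """How many captured frames never made it into the database."""
    return count_saved_images(image_dir) - db_row_count
```

If `missing_events` returns a positive number only when several streams run, the loss is somewhere between the probe and the broker sink, not in the capture code.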

Which DeepStream sample are you testing or referring to? Could you share the configurations of nvmsgconv and nvmsgbroker? Can you use deepstream-test5 to reproduce this issue? If yes, please share the reproduction steps. Thanks!

Are you talking about the C++ deepstream-test5? There is no test5 in the Python samples.

Please use deepstream_test_4.py, which supports sending to Kafka. Could you also share the information mentioned in my first comment? Thanks!

That sample only runs one video at a time, and there is no problem while running one video. With multiple videos, some data are not pushed.

  1. Thanks for sharing! Could you share simplified code to reproduce this issue, including the configuration files and sources?
  2. How did you capture the images? Are you testing a detection model? Does the data include the bboxes?
cv2.imwrite(image_path, lp_frame)

msg_meta = pyds.alloc_nvds_event_msg_meta(user_event_meta)
msg_meta.bbox.top = obj_meta.rect_params.top
msg_meta.bbox.left = obj_meta.rect_params.left
msg_meta.bbox.width = obj_meta.rect_params.width
msg_meta.bbox.height = obj_meta.rect_params.height
msg_meta.frameId = frame_number
msg_meta.trackingId = long_to_uint64(obj_meta.object_id)
msg_meta.confidence = obj_meta.confidence
msg_meta = generate_event_msg_meta(msg_meta, obj_meta.class_id)

user_event_meta.user_meta_data = msg_meta
user_event_meta.base_meta.meta_type = pyds.NvDsMetaType.NVDS_EVENT_MSG_META
pyds.nvds_add_user_meta_to_frame(frame_meta, user_event_meta)
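One detail worth checking in a multi-stream setup (an assumption on my part, not confirmed anywhere in this thread): each event message should carry the id of the stream that produced it, so that nvmsgconv can pick the matching sensor section from the msgconv config and rows from different streams stay distinguishable in the database. A minimal sketch of that id bookkeeping, with pyds calls shown only as comments:

```python
# Minimal sketch (assumption, not from the thread): per-stream identifiers
# for one event payload. The field names mirror NvDsEventMsgMeta
# (sensorId, frameId, trackingId); the helper itself is hypothetical.
def event_ids(source_id: int, frame_num: int, object_id: int) -> dict:
    return {
        "sensorId": source_id,   # which input stream produced the event
        "frameId": frame_num,    # frame counters are per stream, not global
        "trackingId": object_id,
    }

# In the pad probe this would correspond to (sketch, pyds not imported here):
#   msg_meta.sensorId = frame_meta.source_id
#   msg_meta.frameId = frame_meta.frame_num
```

If two streams write events with identical frame/sensor ids, missing rows can also be an artifact of how the comparison is counted rather than of the broker.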

I used OpenCV to write the images and the same code as deepstream_test4 to generate the msg_meta.
This is also the msgconv config file for the two sensor videos:
dstest4_msgconv_config.txt (1.4 KB)

Yes, the data include the bboxes of the detected objects.

Sorry for the late reply!
1. Please note this code: only the first object of every 30th frame triggers sending to the broker.
2. If the above is not the root cause, please share a simplified code diff and the configuration files.
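The throttling mentioned in point 1 can be sketched as below. The 30-frame interval is what the sample uses; the helper name and function shape are mine, not from deepstream_test_4.py itself:

```python
# Sketch of the gating in deepstream_test_4.py: an event message is
# attached only for the first object of every 30th frame. If images are
# written with cv2.imwrite for every detection, this throttling alone
# makes the database hold far fewer rows than the number of saved images.
MSG_INTERVAL = 30  # frames between messages, as in the sample

def should_send(frame_number: int, object_index: int,
                interval: int = MSG_INTERVAL) -> bool:
    """True only for object 0 of frames 0, 30, 60, ..."""
    return object_index == 0 and frame_number % interval == 0
```

Removing or widening this gate (at the cost of more broker traffic) is the first thing to try before suspecting the Kafka or Postgres side.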

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.