Duplicate messages from one batched source if nvmsgbroker is behind nvmultistreamtiler in pipeline

Hi,

Messages sometimes get duplicated when nvmsgbroker is placed after nvmultistreamtiler in the pipeline and the nvstreammux batch size is larger than 1.

Temporary workaround: remove nvmultistreamtiler and set nvstreammux batch-size=1 even when processing 4 streams, but this reduces FPS.
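Until the tiler placement issue is resolved, duplicates can also be filtered on the consumer side. A minimal sketch (function name and keying strategy are my own; it assumes the duplicated payloads are byte-identical, which you should verify against your Kafka topic):

```python
def drop_duplicates(payloads, seen=None):
    """Return only payloads not seen before.

    `seen` can be passed in to persist state across batches.
    In production, bound this set (e.g. with an LRU or TTL cache)
    so it does not grow without limit.
    """
    if seen is None:
        seen = set()
    unique = []
    for p in payloads:
        if p not in seen:
            seen.add(p)
            unique.append(p)
    return unique
```

If the duplicates differ slightly (e.g. in timestamps), key on a tuple of stable fields from the nvmsgconv JSON schema instead of the raw payload.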

• Hardware Platform (Jetson / GPU)
Jetson Nano, Jetson Xavier NX, x86
• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
4.4
• TensorRT Version
7.1.3-1+cuda10.2
• Issue Type( questions, new requirements, bugs)
Bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

Pipeline that reproduces the error:

gst-launch-1.0 -e \
  nvstreammux name=m batched-push-timeout=40000 batch-size=4 width=1920 height=1080 ! \
  nvinfer config-file-path=/data/configs/pgie_arm.txt ! yoloparser ! \
  nvdsanalytics config-file=/data/configs/analytics.txt ! \
  nvmultistreamtiler rows=2 columns=2 width=1280 height=720 ! queue ! \
  nvmsgconv config=/data/configs/msgconv.txt payload-type=257 ! \
  nvmsgbroker proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so \
              conn-str=$BOOTSTRAP_SERVER topic=$TOPIC config=/data/configs/kafka.txt sync=false
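For reference, the deepstream-test4/test5 sample apps attach the message branch via a tee placed before the tiler, so nvmsgconv/nvmsgbroker receive the original batched metadata rather than the tiled single-surface output. A sketch of that layout, adapted from the command above (the tee wiring and the fakesink placeholder are illustrative additions, not part of the original command; source elements are omitted as in the original):

```shell
gst-launch-1.0 -e \
  nvstreammux name=m batched-push-timeout=40000 batch-size=4 width=1920 height=1080 ! \
  nvinfer config-file-path=/data/configs/pgie_arm.txt ! yoloparser ! \
  nvdsanalytics config-file=/data/configs/analytics.txt ! tee name=t \
  t. ! queue ! nvmsgconv config=/data/configs/msgconv.txt payload-type=257 ! \
       nvmsgbroker proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so \
                   conn-str=$BOOTSTRAP_SERVER topic=$TOPIC config=/data/configs/kafka.txt sync=false \
  t. ! queue ! nvmultistreamtiler rows=2 columns=2 width=1280 height=720 ! fakesink
```

With this arrangement the tiler can stay in the pipeline for display while the broker branch is unaffected by it.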

I will check


There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Can you use deepstream-test4 or deepstream-test5 to reproduce this issue? Could you share some of the duplicated data?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.