In the deepstream_parallel_inference_app application, the place id in all data received by Kafka is 0

[Problem Description]

deepstream_parallel_inference_app is run in the container environment nvidia/deepstream:6.4-gc-triton-devel (using the branch that supports DeepStream 6.4).
The pipeline uses 2 RTSP sources + 2 model inferences, and place (location), sensor and other message metadata are configured for each source (a sketch of this msgconv configuration follows the run configuration below). The place ID for the first source is 0, and the place ID for the second source should be 1. Both sources produce detections, so the expected result is that Kafka receives messages with both place ID 0 and place ID 1. However, currently only place ID 0 is received.

The following are the modifications to the project code:

[Run configuration]

application:
  enable-perf-measurement: 1
  perf-measurement-interval-sec: 5

source:
  csv-file-path: sources.csv

streammux:
  batch-size: 2
  batched-push-timeout: 40000
  buffer-pool-size: 2
  enable-padding: 0
  gpu-id: 0
  width: 1920
  height: 1080
  live-source: 1
  nvbuf-memory-type: 0
  config-file: config_streammux.txt
  async-process: 1
  frame-duration: 10
  sync-inputs: 1
  max-latency: 10000000

primary-gie0:
  enable: 1
  gie-unique-id: 1
  batch-size: 1
  config-file: config_infer_primary_helmet_yoloV5.txt
  plugin-type: 0
  gpu-id: 0
  nvbuf-memory-type: 0
  bbox-border-color0: 1;0;0;1
  bbox-border-color1: 0;1;1;1
  bbox-border-color2: 0;0;1;1
  bbox-border-color3: 0;1;0;1
        
branch0:
  pgie-id: 1
  src-ids: 0;1

primary-gie1:
  enable: 1
  gie-unique-id: 2
  batch-size: 1
  config-file: config_infer_primary_yoloV8.txt
  plugin-type: 0
  gpu-id: 0
  nvbuf-memory-type: 0
  bbox-border-color0: 1;0;0;1
  bbox-border-color1: 0;1;1;1
  bbox-border-color2: 0;0;1;1
  bbox-border-color3: 0;1;0;1
  
branch1:
  pgie-id: 2
  src-ids: 0;1

meta-mux:
  config-file: config_metamux.txt
  enable: 1

tiled-display:
  enable: 1
  columns: 2
  rows: 2
  gpu-id: 0
  height: 1080
  width: 1920
  nvbuf-memory-type: 0

osd:
  enable: 1
  process-mode: 1
  gpu-id: 0
  nvbuf-memory-type: 0
  border-width: 1
  font: Serif
  text-bg-color: 0.3;0.3;0.3;1
  text-color: 1;1;1;1
  text-size: 15
  show-clock: 1
  clock-color: 1;1;1;1
  clock-text-size: 12
  clock-x-offset: 800
  clock-y-offset: 1

sink0:
  enable: 1
  type: 1
  sync: 0
  gpu-id: 0
  nvbuf-memory-type: 0
  source-id: 0
  
sink1:
  enable: 0
  type: 2
  sync: 0
  gpu-id: 0
  nvbuf-memory-type: 0
  
sink2:    
  enable: 1
  type: 6
  #sync: 0
  #disable-msgconv: 0
  msg-conv-config: msgconv_config.yml
  msg-conv-payload-type: 0
  #msg-conv-msg2p-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_msgconv.so
  msg-conv-msg2p-new-api: 1
  iframeinterval: 25
  multiple-payloads: 1
  new-api: 0
  topic: dstest
  msg-broker-conn-str: 192.168.1.238;9092
  msg-broker-proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so
  
sink3:
  enable: 1
  type: 4
  sync: 0
  bitrate: 8000000
  codec: 1
  enc-type: 0
  profile: 0
  rtsp-port: 8554
  udp-port: 5400
  
tests:
  file-loop: 0
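
For reference, the following is a minimal sketch of the msgconv_config.yml referenced by sink2. It follows the group layout of the DeepStream test5 msgconv sample config, and the ids, names and coordinates here are placeholders rather than the values actually used in this setup. The sensorN/placeN groups are looked up by source ID, so place0 (id 0) belongs to the first RTSP source and place1 (id 1) to the second, which is why both place IDs 0 and 1 are expected in the Kafka payloads:

sensor0:
  enable: 1
  type: Camera
  id: CAM_0
  description: first RTSP camera
place0:
  enable: 1
  id: 0
  type: intersection/road
  name: PLACE_0
  coordinate: 1.0;2.0;3.0
sensor1:
  enable: 1
  type: Camera
  id: CAM_1
  description: second RTSP camera
place1:
  enable: 1
  id: 1
  type: intersection/road
  name: PLACE_1
  coordinate: 4.0;5.0;6.0
analytics0:
  enable: 1
  id: analytics_0
  description: helmet / yolov8 detection
  version: "1.0"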

[Pipeline diagram]

[Operation results]

  1. Probe log

  2. Kafka record

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)

• DeepStream Version

• JetPack Version (valid for Jetson only)

• TensorRT Version

• NVIDIA GPU Driver Version (valid for GPU only)

• Issue Type (questions, new requirements, bugs)

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce.)

• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.01             Driver Version: 535.183.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 3090        Off | 00000000:01:00.0 Off |                  N/A |
| 30%   47C    P8              24W / 350W |     26MiB / 24576MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                             GPU Memory |
|        ID   ID                                                              Usage      |
|=======================================================================================|
+---------------------------------------------------------------------------------------+

deepstream-app version 6.4.0
DeepStreamSDK 6.4.0
CUDA Driver Version: 12.2
CUDA Runtime Version: 12.2
TensorRT Version: 8.6
cuDNN Version: 8.9
libNVWarp360 Version: 2.0.1d3

I’m not sure if it’s a bug or a configuration issue.

The nvmultistreamtiler plugin composites all frames of a batch into a single frame. After the tiler there is only one frame left in the batch, so its source_id is 0.

So in the deepstream_parallel_inference_app application, how can we configure the broker sink to skip the tiler and forward data without modifying the existing application code?

Please read this topic from Feb 17 '23.

After multiple tests, it was found that configuring msg-conv inside the sink configuration group did not produce the output correctly. It was necessary to enable the common message converter (common_msg_conv) as a separate group and generate the payload from object metadata (msg-conv-msg2p-new-api: 1) to get the correct data output.
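
As a minimal sketch of that layout (assuming the parallel inference app parses a top-level message-converter group with the same keys as deepstream-app; verify the group and key names against the branch you build), the converter settings move out of sink2 and the broker sink keeps only the connection details:

message-converter:
  enable: 1
  msg-conv-config: msgconv_config.yml
  msg-conv-payload-type: 0
  msg-conv-msg2p-new-api: 1

sink2:
  enable: 1
  type: 6
  new-api: 0
  topic: dstest
  msg-broker-conn-str: 192.168.1.238;9092
  msg-broker-proto-lib: /opt/nvidia/deepstream/deepstream/lib/libnvds_kafka_proto.so

Because the common converter is attached before the tiler, while the per-sink converter sits after it, the common converter still sees one frame meta per source and can emit payloads with both place ID 0 and place ID 1.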