Deepstream parallel pipeline video sink issue

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson AGX Orin
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1
• TensorRT Version 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I am currently working on a parallel DeepStream pipeline.
I am referring to the following repo: https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app

My pipeline is like the picture below.

When I save the output videos, let's say output_1 and output_2, the issue arises.
The bounding boxes from YOLOv4 appear in the second video, output_2, along with the bounding boxes from Bodypose2D.

Is it not possible to encode two separate videos in the same pipeline?

The configurations decide which model's output goes to which stream. Can you post all your configurations?

Thanks for the reply Fiona.
Here are my config files.

[PGIE 0]

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../model/Primary_Detector/resnet10.caffemodel
proto-file=../model/Primary_Detector/resnet10.prototxt
model-engine-file=../model/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
labelfile-path=../model/Primary_Detector/labels.txt
int8-calib-file=../model/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=1
network-mode=1
num-detected-classes=4
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
#scaling-filter=0
#scaling-compute-hw=0

[class-attrs-all]
pre-cluster-threshold=0.2
eps=0.2
group-threshold=1

[PGIE 1]

[property]
gpu-id=0
net-scale-factor=1.0
model-color-format=0
model-engine-file=../model/unetres18_v4_pruned0.65_800_data.uff_b1_gpu0_fp16.engine
uff-file=../model/unetres18_v4_pruned0.65_800_data.uff
infer-dims=3;512;512
uff-input-order=0
uff-input-blob-name=data
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=2
network-type=2
output-blob-names=final_conv/BiasAdd
segmentation-threshold=0.0
batch-size=1

[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

The model configurations are not for the pipeline control. Please post all the app configurations, e.g. the configurations under https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app/tree/master/tritonclient/sample/configs/apps/bodypose_yolo

Well, I built the pipeline with code similar to the following link:
https://gist.github.com/crearo/a49a8805857f1237c401be14ba6d3b03

So the config files I use are those two.

Does nvstreamdemux have to be attached in the pipeline?

Yes. nvstreamdemux should be in the pipeline, and you need to implement the logic for removing the corresponding sink branch when a source is removed.
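As an illustration (a minimal sketch, not the exact app pipeline: file names, resolutions, and the config path are placeholders, and the exact element chain depends on your setup), a demuxed pipeline with two independent encode branches looks roughly like this:

```shell
# Hypothetical sketch: two sources batched by nvstreammux, inferred once,
# then split by nvstreamdemux into two fully independent encode/sink branches,
# so each output file only carries its own stream's metadata overlay.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=2 width=1280 height=720 ! \
  nvinfer config-file-path=pgie_config.txt ! \
  nvstreamdemux name=demux \
  filesrc location=in_0.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! mux.sink_0 \
  filesrc location=in_1.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! mux.sink_1 \
  demux.src_0 ! queue ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! \
    nvv4l2h264enc ! h264parse ! qtmux ! filesink location=output_1.mp4 \
  demux.src_1 ! queue ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! \
    nvv4l2h264enc ! h264parse ! qtmux ! filesink location=output_2.mp4
```

The key point is that each `demux.src_N` pad feeds its own `queue ! ... ! filesink` chain, so the OSD for one stream cannot be drawn on the other stream's frames.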

Okay, I’ll try and share the result :) Thanks a lot!

I got one more related issue; well, it could be a simple question about queue.
Before adding streamdemux-streammux after the tee (between tee and pgie), I added a queue there, just like the picture below.

With the queue added, the overlay issue (the output of pgie_0 also shows on the output of pgie_1) does not happen on every frame.
So what is the reason for this? Just adding a queue made the branches more independent.

queue will not change any inferencing logic. See the queue element documentation: https://gstreamer.freedesktop.org/documentation/coreelements/queue.html

Please make sure you have not changed any code of https://github.com/NVIDIA-AI-IOT/deepstream_parallel_inference_app (a project demonstrating how to use nvmetamux to run multiple models in parallel). Please share your configurations of the app.

Okay I’ll try and share the result
Thanks :)

Hi @young2theMax ,
Do you still need support for this topic? Or should we close it? Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.