How to link nvvidconv to two elements

My DeepStream 6.0 pipeline is attached as a PDF. In brief, I am running object detection, tracking, and classification models on 2 local video files. After that, for each stream, I’d like to:

  • Save numpy images of the detected objects → I got this working
  • Output a live RTSP video stream with the bounding boxes → I can’t get this working

The issue
Looking at the attached PDF, it seems that the part of the pipeline that should output the RTSP stream is not attached to the rest of the pipeline. In fact, you’ll see two floating pieces in the attached PDF (one at the top and one at the bottom); there are two of them because there are two input streams.
To fix this, I would need to connect:

  • Gstnvvideoconvert convertor_0 to GstNvDsOsd onscreendisplay_0
  • Gstnvvideoconvert convertor_1 to GstNvDsOsd onscreendisplay_1

Python code
The above links are missing because, in Python, both `convertor_0.link(onscreendisplay_0)` and `convertor_1.link(onscreendisplay_1)` return False. I don’t know why, but I suspect it is because `convertor_0` is already linked to `filter_numpy_frame_0` and `convertor_1` is already linked to `filter_numpy_frame_1`. Is this the reason? What architecture should I use to do this properly?
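
For reference, a quick way to confirm that the converters’ src pads are already taken (which would explain the False return values) would be something like the sketch below; it assumes the element variables are the ones used in my script.

```python
# Minimal diagnostic sketch: a static src pad can only have one peer, so a
# second link() on convertor_0 fails once it is already tied to
# filter_numpy_frame_0. The element variables (convertor_0, convertor_1) are
# assumed to be the ones from my script.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst


def report_src_link(element):
    src_pad = element.get_static_pad("src")
    if src_pad.is_linked():
        peer = src_pad.get_peer().get_parent_element()
        print(f"{element.get_name()} src already linked to {peer.get_name()}")
    else:
        print(f"{element.get_name()} src is free")


report_src_link(convertor_0)  # expected: already linked to filter_numpy_frame_0
report_src_link(convertor_1)  # expected: already linked to filter_numpy_frame_1
```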

Output
The following is the output of the code run with GST_DEBUG=3:

Camera 0 file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_qHD.h264
Camera 1 file:///opt/nvidia/deepstream/deepstream-6.0/samples/streams/sample_qHD.h264
Number of streams 2
//src/pipeline/main.py:60: PyGIDeprecationWarning: Since version 3.11, calling threads_init is no longer needed. See: https://wiki.gnome.org/PyGObject/Threading
  GObject.threads_init()
Creating Pipeline 
Creating nvstreammux 
Creating Pgie 
 
//src/pipeline/main.py:208: PyGIDeprecationWarning: GObject.MainLoop is deprecated; use GLib.MainLoop instead
  loop = GObject.MainLoop()
Adding camera stream0
Creating source_bin  0  
 
Creating source bin
source-bin-00
demux source 0 
Added camera 0
Creating H264 Encoder
Creating H264 rtppay
udpsinkport 5400
 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test0 ***
Adding camera stream1
Creating source_bin  1  
 
Creating source bin
source-bin-01
demux source 1 
Added camera 1
Creating H264 Encoder
Creating H264 rtppay
udpsinkport 5401
 *** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test1 ***
Starting pipeline 
0:00:00.117659631  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_1:sink> Unable to try format: Unknown error -1
0:00:00.117701888  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<encoder_1:sink> Could not probe minimum capture size for pixelformat YM12
0:00:00.117717169  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_1:sink> Unable to try format: Unknown error -1
0:00:00.117728171  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<encoder_1:sink> Could not probe maximum capture size for pixelformat YM12
0:00:00.117745751  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x2d50db0 Failed to determine interlace mode
0:00:00.117769393  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_1:sink> Unable to try format: Unknown error -1
0:00:00.117783904  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<encoder_1:sink> Could not probe minimum capture size for pixelformat NM12
0:00:00.117796424  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_1:sink> Unable to try format: Unknown error -1
0:00:00.117810424  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<encoder_1:sink> Could not probe maximum capture size for pixelformat NM12
0:00:00.117823587  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x2d50db0 Failed to determine interlace mode
0:00:00.117878853  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_1:src> Unable to try format: Unknown error -1
0:00:00.117892619  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<encoder_1:src> Could not probe minimum capture size for pixelformat H264
0:00:00.117905037  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_1:src> Unable to try format: Unknown error -1
0:00:00.117917239  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<encoder_1:src> Could not probe maximum capture size for pixelformat H264
0:00:00.118059724  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_0:sink> Unable to try format: Unknown error -1
0:00:00.118073095  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<encoder_0:sink> Could not probe minimum capture size for pixelformat YM12
0:00:00.118083664  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_0:sink> Unable to try format: Unknown error -1
0:00:00.118092857  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<encoder_0:sink> Could not probe maximum capture size for pixelformat YM12
0:00:00.118102628  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x2c7ac30 Failed to determine interlace mode
0:00:00.118133174  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_0:sink> Unable to try format: Unknown error -1
0:00:00.118144191  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<encoder_0:sink> Could not probe minimum capture size for pixelformat NM12
0:00:00.118153099  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_0:sink> Unable to try format: Unknown error -1
0:00:00.118162287  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<encoder_0:sink> Could not probe maximum capture size for pixelformat NM12
0:00:00.118171776  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x2c7ac30 Failed to determine interlace mode
0:00:00.118206121  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_0:src> Unable to try format: Unknown error -1
0:00:00.118215695  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<encoder_0:src> Could not probe minimum capture size for pixelformat H264
0:00:00.118224258  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<encoder_0:src> Unable to try format: Unknown error -1
0:00:00.118233238  1085      0x2d56870 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<encoder_0:src> Could not probe maximum capture size for pixelformat H264
WARNING: infer_proto_utils.cpp:201 backend.trt_is is deprecated. updated it to backend.triton
0:00:00.293676665  1085      0x2d56870 WARN           nvinferserver gstnvinferserver_impl.cpp:352:validatePluginConfig:<vehicles-nvinference-engine> warning: NvInferServer asynchronous mode is applicable for secondaryclassifiers only. Turning off asynchronous mode
I0110 19:29:05.099426 1085 metrics.cc:290] Collecting metrics for GPU 0: Tesla T4
I0110 19:29:05.357249 1085 libtorch.cc:1029] TRITONBACKEND_Initialize: pytorch
I0110 19:29:05.357282 1085 libtorch.cc:1039] Triton TRITONBACKEND API version: 1.4
I0110 19:29:05.357293 1085 libtorch.cc:1045] 'pytorch' TRITONBACKEND API version: 1.4
2022-01-10 19:29:05.464397: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
I0110 19:29:05.507777 1085 tensorflow.cc:2169] TRITONBACKEND_Initialize: tensorflow
I0110 19:29:05.507828 1085 tensorflow.cc:2179] Triton TRITONBACKEND API version: 1.4
I0110 19:29:05.507838 1085 tensorflow.cc:2185] 'tensorflow' TRITONBACKEND API version: 1.4
I0110 19:29:05.507845 1085 tensorflow.cc:2209] backend configuration:
{"cmdline":{"allow-soft-placement":"true","gpu-memory-fraction":"0.000000"}}
I0110 19:29:05.510502 1085 onnxruntime.cc:1970] TRITONBACKEND_Initialize: onnxruntime
I0110 19:29:05.510529 1085 onnxruntime.cc:1980] Triton TRITONBACKEND API version: 1.4
I0110 19:29:05.510538 1085 onnxruntime.cc:1986] 'onnxruntime' TRITONBACKEND API version: 1.4
I0110 19:29:05.531238 1085 openvino.cc:1193] TRITONBACKEND_Initialize: openvino
I0110 19:29:05.531270 1085 openvino.cc:1203] Triton TRITONBACKEND API version: 1.4
I0110 19:29:05.531279 1085 openvino.cc:1209] 'openvino' TRITONBACKEND API version: 1.4
I0110 19:29:05.640906 1085 pinned_memory_manager.cc:240] Pinned memory pool is created at '0x7fd42c000000' with size 268435456
I0110 19:29:05.641260 1085 cuda_memory_manager.cc:105] CUDA memory pool is created on device 0 with size 67108864
I0110 19:29:05.641913 1085 server.cc:504] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+
I0110 19:29:05.641975 1085 server.cc:543] 
+-------------+-------------------------------+-------------------------------+
| Backend     | Path                          | Config                        |
+-------------+-------------------------------+-------------------------------+
| tensorrt    | <built-in>                    | {}                            |
| pytorch     | /opt/tritonserver/backends/py | {}                            |
|             | torch/libtriton_pytorch.so    |                               |
| tensorflow  | /opt/tritonserver/backends/te | {"cmdline":{"allow-soft-place |
|             | nsorflow1/libtriton_tensorflo | ment":"true","gpu-memory-frac |
|             | w1.so                         | tion":"0.000000"}}            |
| onnxruntime | /opt/tritonserver/backends/on | {}                            |
|             | nxruntime/libtriton_onnxrunti |                               |
|             | me.so                         |                               |
| openvino    | /opt/tritonserver/backends/op | {}                            |
|             | envino/libtriton_openvino.so  |                               |
+-------------+-------------------------------+-------------------------------+
I0110 19:29:05.642008 1085 server.cc:586] 
+-------+---------+--------+
| Model | Version | Status |
+-------+---------+--------+
+-------+---------+--------+
I0110 19:29:05.642082 1085 tritonserver.cc:1718] 
+----------------------------------+------------------------------------------+
| Option                           | Value                                    |
+----------------------------------+------------------------------------------+
| server_id                        | triton                                   |
| server_version                   | 2.13.0                                   |
| server_extensions                | classification sequence model_repository |
|                                  |  model_repository(unload_dependents) sch |
|                                  | edule_policy model_configuration system_ |
|                                  | shared_memory cuda_shared_memory binary_ |
|                                  | tensor_data statistics                   |
| model_repository_path[0]         | /src/pipeline/models/repository          |
| model_control_mode               | MODE_EXPLICIT                            |
| strict_model_config              | 0                                        |
| pinned_memory_pool_byte_size     | 268435456                                |
| cuda_memory_pool_byte_size{0}    | 67108864                                 |
| min_supported_compute_capability | 6.0                                      |
| strict_readiness                 | 1                                        |
| exit_timeout                     | 30                                       |
+----------------------------------+------------------------------------------+
I0110 19:29:05.643388 1085 model_repository_manager.cc:1045] loading: vehicles:1
I0110 19:29:05.743639 1085 libtorch.cc:1078] TRITONBACKEND_ModelInitialize: vehicles (version 1)
W0110 19:29:05.744607 1085 libtorch.cc:192] skipping model configuration auto-complete for 'vehicles': not supported for pytorch backend
I0110 19:29:05.744996 1085 libtorch.cc:219] Optimized execution is enabled
I0110 19:29:05.745012 1085 libtorch.cc:236] Inference Mode is disabled
I0110 19:29:05.746145 1085 libtorch.cc:1119] TRITONBACKEND_ModelInstanceInitialize: vehicles_0 (device 0)
TOT FPS 0.0   AVG FPS 0   STREAMS UP 0
I0110 19:29:10.375981 1085 model_repository_manager.cc:1212] successfully loaded 'vehicles' version 1
INFO: infer_trtis_backend.cpp:206 TrtISBackend id:11 initialized model: vehicles
WARNING: infer_proto_utils.cpp:201 backend.trt_is is deprecated. updated it to backend.triton
0:00:07.651271070  1085      0x2d56870 WARN           nvinferserver gstnvinferserver_impl.cpp:352:validatePluginConfig:<people-nvinference-engine> warning: NvInferServer asynchronous mode is applicable for secondaryclassifiers only. Turning off asynchronous mode
I0110 19:29:12.413109 1085 model_repository_manager.cc:1045] loading: people:1
I0110 19:29:12.513398 1085 libtorch.cc:1078] TRITONBACKEND_ModelInitialize: people (version 1)
W0110 19:29:12.513791 1085 libtorch.cc:192] skipping model configuration auto-complete for 'people': not supported for pytorch backend
I0110 19:29:12.514162 1085 libtorch.cc:219] Optimized execution is enabled
I0110 19:29:12.514175 1085 libtorch.cc:236] Inference Mode is disabled
I0110 19:29:12.515307 1085 libtorch.cc:1119] TRITONBACKEND_ModelInstanceInitialize: people_0 (device 0)
I0110 19:29:13.227048 1085 model_repository_manager.cc:1212] successfully loaded 'people' version 1
INFO: infer_trtis_backend.cpp:206 TrtISBackend id:10 initialized model: people
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
0:00:08.896507015  1085      0x2d56870 WARN           nvinferserver gstnvinferserver_impl.cpp:284:validatePluginConfig:<primary-inference> warning: Configuration file batch-size reset to: 16
WARNING: infer_proto_utils.cpp:201 backend.trt_is is deprecated. updated it to backend.triton
I0110 19:29:14.120334 1085 logging.cc:49] [MemUsageChange] Init CUDA: CPU +318, GPU +0, now: CPU 3262, GPU 2594 (MiB)
I0110 19:29:14.121388 1085 logging.cc:49] Loaded engine size: 0 MB
I0110 19:29:14.121637 1085 logging.cc:49] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 3262 MiB, GPU 2594 MiB
E0110 19:29:14.122063 1085 logging.cc:43] 1: [stdArchiveReader.cpp::StdArchiveReader::29] Error Code 1: Serialization (Serialization assertion magicTagRead == magicTag failed.Magic tag does not match)
E0110 19:29:14.122092 1085 logging.cc:43] 4: [runtime.cpp::deserializeCudaEngine::75] Error Code 4: Internal Error (Engine deserialization failed.)
I0110 19:29:14.153192 1085 logging.cc:49] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 3305, GPU 2594 (MiB)
I0110 19:29:14.153224 1085 logging.cc:49] Loaded engine size: 21 MB
I0110 19:29:14.153315 1085 logging.cc:49] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 3305 MiB, GPU 2594 MiB
I0110 19:29:14.325136 1085 logging.cc:49] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 3363, GPU 2642 (MiB)
I0110 19:29:14.326053 1085 logging.cc:49] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 3363, GPU 2652 (MiB)
I0110 19:29:14.327115 1085 logging.cc:49] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 3363, GPU 2636 (MiB)
I0110 19:29:14.327248 1085 logging.cc:49] [MemUsageSnapshot] deserializeCudaEngine end: CPU 3363 MiB, GPU 2636 MiB
I0110 19:29:14.327640 1085 model_repository_manager.cc:1045] loading: yolov5_tensorrt:1
I0110 19:29:14.459583 1085 logging.cc:49] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 3341, GPU 2636 (MiB)
I0110 19:29:14.459611 1085 logging.cc:49] Loaded engine size: 21 MB
I0110 19:29:14.459716 1085 logging.cc:49] [MemUsageSnapshot] deserializeCudaEngine begin: CPU 3341 MiB, GPU 2636 MiB
I0110 19:29:14.558478 1085 logging.cc:49] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 3347, GPU 2662 (MiB)
I0110 19:29:14.559316 1085 logging.cc:49] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 3347, GPU 2670 (MiB)
I0110 19:29:14.560304 1085 logging.cc:49] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 3347, GPU 2654 (MiB)
I0110 19:29:14.560430 1085 logging.cc:49] [MemUsageSnapshot] deserializeCudaEngine end: CPU 3347 MiB, GPU 2654 MiB
I0110 19:29:14.560447 1085 plan_backend.cc:456] Creating instance yolov5_tensorrt_0_0_gpu0 on GPU 0 (7.5) using model.pt
I0110 19:29:14.562135 1085 logging.cc:49] [MemUsageSnapshot] ExecutionContext creation begin: CPU 3347 MiB, GPU 2654 MiB
I0110 19:29:14.562831 1085 logging.cc:49] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 3347, GPU 2662 (MiB)
I0110 19:29:14.563518 1085 logging.cc:49] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 3347, GPU 2670 (MiB)
I0110 19:29:14.564753 1085 logging.cc:49] [MemUsageSnapshot] ExecutionContext creation end: CPU 3347 MiB, GPU 2946 MiB
I0110 19:29:14.565084 1085 plan_backend.cc:863] Created instance yolov5_tensorrt_0_0_gpu0 on GPU 0 with stream priority 0
I0110 19:29:14.565176 1085 model_repository_manager.cc:1212] successfully loaded 'yolov5_tensorrt' version 1
INFO: infer_trtis_backend.cpp:206 TrtISBackend id:1 initialized model: yolov5_tensorrt
0:00:09.833023523  1085      0x2d56870 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
Decodebin child added: source 
Decodebin child added: decodebin0 
0:00:09.834016320  1085      0x2d56870 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
0:00:09.834346893  1085      0x2d56870 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
Decodebin child added: source 
Decodebin child added: decodebin1 
0:00:09.834582566  1085      0x2d56870 WARN                 basesrc gstbasesrc.c:3600:gst_base_src_start_complete:<source> pad not activated yet
Warning: gst-library-error-quark: NvInferServer asynchronous mode is applicable for secondaryclassifiers only. Turning off asynchronous mode (5): gstnvinferserver_impl.cpp(352): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:vehicles-nvinference-engine
Warning: gst-library-error-quark: NvInferServer asynchronous mode is applicable for secondaryclassifiers only. Turning off asynchronous mode (5): gstnvinferserver_impl.cpp(352): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:people-nvinference-engine
Warning: gst-library-error-quark: Configuration file batch-size reset to: 16 (5): gstnvinferserver_impl.cpp(284): validatePluginConfig (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference
Decodebin child added:Decodebin child added:  h264parse0h264parse1  
Decodebin child added:Decodebin child added:  capsfilter1capsfilter0  
Decodebin child added:Decodebin child added:  nvv4l2decoder0nvv4l2decoder1 
 
0:00:09.882910213  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.882926512  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MJPG
0:00:09.882941833  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.882949587  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MJPG
0:00:09.882968803  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.882970443  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.882978453  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG4
0:00:09.882994146  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat MJPG
0:00:09.883001291  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.883012431  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883024196  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG4
0:00:09.883038180  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat MJPG
0:00:09.883053985  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.883063916  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG2
0:00:09.883065540  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883074960  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.883088698  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat MPG4
0:00:09.883099192  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG2
0:00:09.883110372  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883132552  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.883133919  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat MPG4
0:00:09.883145912  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H265
0:00:09.883164758  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883166291  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.883178406  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat MPG2
0:00:09.883190215  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H265
0:00:09.883211475  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883211790  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.883225166  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat MPG2
0:00:09.883236042  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP90
0:00:09.883253292  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.883254772  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883266289  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP90
0:00:09.883279421  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat H265
0:00:09.883293566  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.883298544  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883300717  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP80
0:00:09.883312672  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat H265
0:00:09.883333145  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.883335164  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883346392  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP80
0:00:09.883357629  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat VP90
0:00:09.883372975  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.883377631  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883386950  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H264
0:00:09.883400097  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat VP90
0:00:09.883410852  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:09.883426851  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883428306  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H264
0:00:09.883440944  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat VP80
0:00:09.883460114  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883469718  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat VP80
0:00:09.883473538  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:09.883480821  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat NM12
0:00:09.883483200  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883493514  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:09.883507005  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe minimum capture size for pixelformat H264
0:00:09.883517512  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat NM12
0:00:09.883525228  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:sink> Unable to try format: Unknown error -1
0:00:09.883537842  1085 0x7fd2100951e0 WARN                    v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x7fd20002c110 Failed to determine interlace mode
0:00:09.883548299  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:sink> Could not probe maximum capture size for pixelformat H264
0:00:09.883590612  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:src> Unable to try format: Unknown error -1
0:00:09.883600461  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2937:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:src> Could not probe minimum capture size for pixelformat NM12
0:00:09.883609753  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:3051:gst_v4l2_object_get_nearest_size:<nvv4l2decoder1:src> Unable to try format: Unknown error -1
0:00:09.883619258  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2943:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder1:src> Could not probe maximum capture size for pixelformat NM12
0:00:09.883630388  1085     0x25fbe360 WARN                    v4l2 gstv4l2object.c:2388:gst_v4l2_object_add_interlace_mode:0x7fd20c02a020 Failed to determine interlace mode
0:00:09.996717865  1085 0x7fd2100951e0 WARN            v4l2videodec gstv4l2videodec.c:1685:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder0> Duration invalid, not setting latency
0:00:09.996760550  1085 0x7fd2100951e0 WARN          v4l2bufferpool gstv4l2bufferpool.c:1065:gst_v4l2_buffer_pool_start:<nvv4l2decoder0:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:09.996868848  1085     0x25fbe360 WARN            v4l2videodec gstv4l2videodec.c:1685:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder1> Duration invalid, not setting latency
0:00:09.996894372  1085     0x25fbe360 WARN          v4l2bufferpool gstv4l2bufferpool.c:1065:gst_v4l2_buffer_pool_start:<nvv4l2decoder1:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:09.997250617  1085 0x7fd200023240 WARN          v4l2bufferpool gstv4l2bufferpool.c:1512:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder0:pool:src> Driver should never set v4l2_buffer.field to ANY
0:00:09.997353230  1085 0x7fd200023aa0 WARN          v4l2bufferpool gstv4l2bufferpool.c:1512:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder1:pool:src> Driver should never set v4l2_buffer.field to ANY
TOT FPS 0.0   AVG FPS 0   STREAMS UP 0
Objects found in metadata after pgie: 4
Objects found in metadata after pgie: 4
TOT FPS 0.4   AVG FPS 0   STREAMS UP 2
Objects found in metadata after pgie: 4
Objects found in metadata after pgie: 4
Saving pipeline in folder /shared/graph
Files in dst folder: ['pipeline.dot', 'pipeline.pdf']
TOT FPS 0.4   AVG FPS 0   STREAMS UP 2
Objects found in metadata after pgie: 4
Objects found in metadata after pgie: 4
TOT FPS 0.4   AVG FPS 0   STREAMS UP 2
Objects found in metadata after pgie: 4
Objects found in metadata after pgie: 4
Objects found in metadata after pgie: 4
Objects found in metadata after pgie: 4
TOT FPS 0.8   AVG FPS 0   STREAMS UP 2
Objects found in metadata after pgie: 4
Objects found in metadata after pgie: 4
{'class_id': 2,
 'classifier_meta_list': [],
 'confidence': 0.7046268582344055,
 'detector_bbox_info': {'height': 0.0, 'left': 0.0, 'top': 0.0, 'width': 0.0},
 'misc_obj_info': array([0, 0, 0, 0], dtype=int32),
 'obj_label': 'car',
 'obj_user_meta_list': {},
 'object_id': 0,
 'parent': None,
 'rect_params': {'bg_color': <pyds.NvOSD_ColorParams object at 0x7fd5853d9670>,
                 'border_color': <pyds.NvOSD_ColorParams object at 0x7fd5853d96b0>,
                 'border_width': 3,
                 'color_id': 0,
                 'has_bg_color': 0,
                 'has_color_info': 0,
                 'height': 58.99993133544922,
                 'left': 724.0,
                 'reserved': 0,
                 'top': 667.0,
                 'width': 66.0},
 'reserved': array([0, 0, 0, 0], dtype=int32),
 'text_params': <pyds.NvOSD_TextParams object at 0x7fd5853d9930>,
 'tracker_bbox_info': {'height': 58.99993133544922,
                       'left': 724.0,
                       'top': 667.0,
                       'width': 66.0},
 'tracker_confidence': 1.0,
 'unique_component_id': 1}

....
CONTINUE PRINTING DETECTED OBJECTS
....

Setup
I am working in the DeepStream 6.0 Triton container with a Tesla T4.

Note
I have already looked at the examples in your repository, but I could not fix the issue with them:

pipeline.pdf (42.2 KB)

Sorry for the late response, is this still an issue to support? Thanks

Hi @kayccc , yes it is. Thank you.

The Gstnvvideoconvert convertor_0 has already been linked to the capsfilter, so it is the capsfilter that needs to be linked to the NvDsOsd.

I would need to connect convertor_0 to onscreendisplay_0; otherwise, if I connect filter_numpy_frame_0 to onscreendisplay_0, wouldn’t the input to onscreendisplay_0 be filtered? I am not sure whether the capsfilter removes anything from the pipeline.
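
For reference, this is roughly how filter_numpy_frame_0 is created in my script (the exact caps string here is illustrative). My current understanding is that a capsfilter only constrains the format negotiated between its neighbours rather than dropping buffers, but I may be wrong.

```python
# Illustrative sketch of how filter_numpy_frame_0 is set up; the caps string
# is an assumption. As far as I understand, a capsfilter only restricts
# format negotiation and does not drop or modify the buffers themselves.
filter_numpy_frame_0 = Gst.ElementFactory.make("capsfilter", "filter_numpy_frame_0")
filter_numpy_frame_0.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA")
)
```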

I found out that GStreamer provides the tee element to duplicate the output of an element. I tried to use it, but the result was unsuccessful: neither branch produces any output (a sketch of how I wired it is below the list). In fact:

  • The branch that should generate numpy arrays does not produce any
  • The branch that should output the RTSP stream does not produce any output (I cannot play it with ffmpeg, nor can I see its specs with ffprobe)
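
For reference, this is roughly how I wired the tee for stream 0; the RTSP queue variable name is illustrative, and the other element variables are the ones used elsewhere in my script.

```python
# Rough sketch of the tee wiring for stream 0. The RTSP queue name is
# illustrative; convertor_0, filter_numpy_frame_0, onscreendisplay_0 and
# pipeline are the objects from my script. Each tee branch gets its own
# queue, and the tee src pads are request pads.
tee_0 = Gst.ElementFactory.make("tee", "post_inference_tee_0")
queue_numpy_0 = Gst.ElementFactory.make("queue", "post_inference_tee_queue_numpy_0")
queue_rtsp_0 = Gst.ElementFactory.make("queue", "post_inference_tee_queue_rtsp_0")
for element in (tee_0, queue_numpy_0, queue_rtsp_0):
    pipeline.add(element)

convertor_0.link(tee_0)

# Branch 1: capsfilter feeding the numpy probe
tee_0.get_request_pad("src_%u").link(queue_numpy_0.get_static_pad("sink"))
queue_numpy_0.link(filter_numpy_frame_0)

# Branch 2: OSD feeding the encoder / RTSP sink
tee_0.get_request_pad("src_%u").link(queue_rtsp_0.get_static_pad("sink"))
queue_rtsp_0.link(onscreendisplay_0)
```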

See the pipeline attached.
Note: the attached pipeline only shows one camera. In reality I am processing 50 cameras, which is why you will see an nvinferserver element with batch size equal to 16. Note that I have the same issue even with 50 cameras.
pipeline.pdf (44.5 KB)

I found an error in the generation of the RTSP output stream. With the pipeline attached above ( https://forums.developer.nvidia.com/uploads/short-url/zkJal4l2F54vCgxc8TRtjX6lFAy.pdf ), the output RTSP video now works (although sometimes the lower part of the stream is pixelated).

To sum up, consider the pipeline here: https://forums.developer.nvidia.com/uploads/short-url/zkJal4l2F54vCgxc8TRtjX6lFAy.pdf . As it is, only the RTSP branch works, but the probe attached to filter_numpy_frame_0 never gets any data. If I remove post_inference_tee_queue_numpy_0, both branches work. Why would that be?
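
For completeness, the probe in question is attached roughly like this (which pad it sits on and the callback body are simplified here):

```python
# Roughly how the numpy probe is attached to filter_numpy_frame_0; the
# callback body is trimmed to the relevant part. With
# post_inference_tee_queue_numpy_0 in place, this callback is never invoked.
import pyds


def numpy_frame_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if gst_buffer is None:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    # ... walk batch_meta.frame_meta_list and save the frames as numpy ...
    return Gst.PadProbeReturn.OK


filter_numpy_frame_0.get_static_pad("src").add_probe(
    Gst.PadProbeType.BUFFER, numpy_frame_probe, 0
)
```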

In any case, there is still the issue of the RTSP output being pixelated (the bottom part of the video is missing / pixelated).

Thank you!