Streammux "Output width not set" error despite the property being set

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

I am getting the following error:

Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source_bin 1
Creating Tee
Creating nvstreamdemux
Creating the pgie
Creating gst-dsmetamux
Creating tiler
Creating nvvidconv
Creating nvosd
Creating Code Parser
Creating Container
Creating FILESINK

WARNING: Overriding infer-config batch-size 1 with number of sources 2

Adding elements to Pipeline

0:00:00.050682174 367 0x2605f00 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.050702684 367 0x2605f00 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe minimum capture size for pixelformat YM12
0:00:00.050707286 367 0x2605f00 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.050711117 367 0x2605f00 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe maximum capture size for pixelformat YM12
0:00:00.050719570 367 0x2605f00 WARN v4l2 gstv4l2object.c:2395:gst_v4l2_object_add_interlace_mode:0x25e5a80 Failed to determine interlace mode
0:00:00.050729670 367 0x2605f00 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.050734122 367 0x2605f00 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe minimum capture size for pixelformat NM12
0:00:00.050739142 367 0x2605f00 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:encoder:sink Unable to try format: Unknown error -1
0:00:00.050744929 367 0x2605f00 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:encoder:sink Could not probe maximum capture size for pixelformat NM12
0:00:00.050750625 367 0x2605f00 WARN v4l2 gstv4l2object.c:2395:gst_v4l2_object_add_interlace_mode:0x25e5a80 Failed to determine interlace mode
0:00:00.050773864 367 0x2605f00 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:encoder:src Unable to try format: Unknown error -1
0:00:00.050779780 367 0x2605f00 WARN v4l2 gstv4l2object.c:2942:gst_v4l2_object_probe_caps_for_format:encoder:src Could not probe minimum capture size for pixelformat H265
0:00:00.050782944 367 0x2605f00 WARN v4l2 gstv4l2object.c:3057:gst_v4l2_object_get_nearest_size:encoder:src Unable to try format: Unknown error -1
0:00:00.050786354 367 0x2605f00 WARN v4l2 gstv4l2object.c:2948:gst_v4l2_object_probe_caps_for_format:encoder:src Could not probe maximum capture size for pixelformat H265
0:00:00.050917437 367 0x2605f00 WARN nvstreammux gstnvstreammux.c:2809:gst_nvstreammux_change_state: error: Output width not set
Error: gst-library-error-quark: Output width not set (5): gstnvstreammux.c(2809): gst_nvstreammux_change_state (): /GstPipeline:pipeline1/GstNvStreamMux:Stream-muxervehiclecount
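This error is raised by nvstreammux when the pipeline changes state while the muxer's "width" (and "height") property is still at its default of 0. Each nvstreammux instance in the pipeline (the error names one of them, Stream-muxervehiclecount) must have these properties set before the pipeline starts. A minimal sketch of the idea, with a plain-Python helper so the validation is visible (`configure_streammux` and its arguments are illustrative names, not from the poster's script):

```python
def configure_streammux(streammux, width, height, batch_size,
                        batched_push_timeout=4000000):
    """Set the mandatory nvstreammux properties before the pipeline plays.

    `streammux` can be any element exposing set_property(), e.g. the result
    of Gst.ElementFactory.make("nvstreammux", "Stream-muxer").
    """
    # Fail fast: a width/height of 0 or None is exactly what reproduces the
    # "Output width not set" error at pipeline start-up.
    if not width or not height:
        raise ValueError(f"invalid mux output size: {width}x{height}")
    streammux.set_property("width", int(width))
    streammux.set_property("height", int(height))
    streammux.set_property("batch-size", int(batch_size))
    streammux.set_property("batched-push-timeout", batched_push_timeout)
```

In a parallel-inference pipeline with several muxers, a helper like this would be called once per muxer; forgetting the call for the second branch is an easy way to hit the error on only that branch.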

• Hardware Platform (Jetson / GPU) - GPU
• DeepStream Version - 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version - TensorRT 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only) - Driver Version: 525.78.01
• Issue Type (questions, new requirements, bugs) - question: trying to implement parallel model inferencing in Python
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
• The pipeline being used

nvidiavm_2.py (11.7 KB)

Could you attach a minimal runnable code sample that reproduces this problem? You can print the parameters first: self.input_width and self.input_height.
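For reference, the kind of check meant here could look like the following (self.input_width and self.input_height are the poster's attributes; the helper name is made up for illustration):

```python
def check_mux_dims(name, width, height):
    # Print exactly what will be passed to set_property(); if either value
    # turns out to be 0, None, or an unparsed string, that would explain the
    # "Output width not set" error at pipeline start-up.
    print(f"{name}: input_width={width!r} input_height={height!r}")
    if not width or not height:
        raise ValueError(f"{name}: output size not set before playing")
    return int(width), int(height)
```

Calling this just before configuring each muxer, e.g. `check_mux_dims("Stream-muxervehiclecount", self.input_width, self.input_height)`, would show which branch is missing its dimensions.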

nvidiavm_2.py (27.7 KB)

Here is the entire code!

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This code doesn't work properly in my environment; it may be missing some necessary modules. Please provide code that runs properly in a clean environment.
Alternatively, you can refer to our demo code and check whether it has problems in your environment:
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps
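When comparing against the demo apps, one quick self-check is to walk the pipeline just before setting it to PLAYING and report any muxer whose output size is still unset. A duck-typed sketch (the element interface mirrors GStreamer's get_property()/get_name(), so the logic can be exercised without a GStreamer install; in a real app the iterable would come from pipeline.iterate_elements()):

```python
def find_unset_muxers(elements):
    """Return the names of elements whose "width"/"height" is unset (0/None).

    `elements` is any iterable of objects exposing get_property() and
    get_name(); elements without width/height properties are skipped.
    """
    unset = []
    for el in elements:
        try:
            width = el.get_property("width")
            height = el.get_property("height")
        except Exception:
            continue  # not a muxer-like element; skip it
        if not width or not height:
            unset.append(el.get_name())
    return unset
```

An empty result means every muxer was configured; otherwise the returned names point at the branch whose setup code never ran.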

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.