Batch-size >1 in nvstreammux throws gst_nvvideoconvert_transform: buffer transform failed

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) - Tesla T4 GPU
• DeepStream Version - 7.0.0
• JetPack Version (valid for Jetson only)
• TensorRT Version - 8.6.1.6-1+cuda12.0
• NVIDIA GPU Driver Version (valid for GPU only) - Build cuda_12.2.r12.2/compiler.33191640_0
• Issue Type( questions, new requirements, bugs)
The pipeline throws the following error when using any batch-size greater than 1 in nvstreammux:

deepstream_python_app | 0:00:09.433510990 104 0x741780000da0 ERROR nvvideoconvert gstnvvideoconvert.c:4235:gst_nvvideoconvert_transform: buffer transform failed
deepstream_python_app | 0:00:09.476271576 104 0x5e4607d4e8c0 WARN nvinfer gstnvinfer.cpp:2420:gst_nvinfer_output_loop: error: Internal data stream error.
deepstream_python_app | 0:00:09.476299318 104 0x5e4607d4e8c0 WARN nvinfer gstnvinfer.cpp:2420:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
gst-launch-1.0

nvstreammux name=nvstreammux0 batch-size=4 batched-push-timeout=40000 width=1920 height=1080 live-source=TRUE ! queue ! nvvideoconvert ! queue ! nvinfer batch-size=4 config-file-path="/workspace/config_files/DeepStream-Yolo/config_infer_primary_yoloV8.txt" model-engine-file="/workspace/config_files/DeepStream-Yolo/model_b16_gpu0_fp16.engine" ! queue ! nvdsosd ! queue ! nvvideoconvert ! video/x-raw,format=BGR ! appsink name=appsink0

uridecodebin3 uri="file:///workspace/classroom.mp4" ! queue ! nvstreammux0.sink_0
uridecodebin3 uri="file:///workspace/classroom.mp4" ! queue ! nvstreammux0.sink_1
uridecodebin3 uri="file:///workspace/classroom.mp4" ! queue ! nvstreammux0.sink_2
uridecodebin3 uri="file:///workspace/classroom.mp4" ! queue ! nvstreammux0.sink_3

Hi @akash.g

I don't think this is related to the batch-size in nvstreammux so much as to the fact that you are trying to convert a batched buffer into host memory. If you insert an nvmultistreamtiler before nvdsosd, which composes the 4 frames into a single surface, you will see that the problem goes away. You can also use an nvstreamdemux to "debatch" the batch back into the original sources and then compose them yourself; it depends on your use case.
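For reference, a minimal sketch of that suggestion applied to the pipeline above (not tested on your setup; the rows/columns values are an assumption for a 2x2 layout of your 4 streams, and the paths/configs are the ones from your post):

```shell
# Sketch: nvmultistreamtiler composes the 4 batched frames into one surface,
# so the downstream nvvideoconvert no longer receives a batched buffer it
# cannot transform into host memory.
gst-launch-1.0 \
  nvstreammux name=nvstreammux0 batch-size=4 batched-push-timeout=40000 \
    width=1920 height=1080 live-source=TRUE ! queue ! \
  nvinfer batch-size=4 \
    config-file-path="/workspace/config_files/DeepStream-Yolo/config_infer_primary_yoloV8.txt" \
    model-engine-file="/workspace/config_files/DeepStream-Yolo/model_b16_gpu0_fp16.engine" ! queue ! \
  nvmultistreamtiler rows=2 columns=2 width=1920 height=1080 ! queue ! \
  nvdsosd ! queue ! nvvideoconvert ! video/x-raw,format=BGR ! appsink name=appsink0 \
  uridecodebin3 uri="file:///workspace/classroom.mp4" ! queue ! nvstreammux0.sink_0 \
  uridecodebin3 uri="file:///workspace/classroom.mp4" ! queue ! nvstreammux0.sink_1 \
  uridecodebin3 uri="file:///workspace/classroom.mp4" ! queue ! nvstreammux0.sink_2 \
  uridecodebin3 uri="file:///workspace/classroom.mp4" ! queue ! nvstreammux0.sink_3
```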

Regards,
Allan Navarro

Embedded SW Engineer at RidgeRun

Contact us: support@ridgerun.com
Developers wiki: https://developer.ridgerun.com/
Website: www.ridgerun.com

Hi Allan,
Thanks for the reply. nvmultistreamtiler did resolve the error for us. We now get all 4 streams composed into a single image as a 2D tile. Is there a way to send one of these frames (with bounding boxes) to some endpoint when there is a detection? Or do we have to use nvstreamdemux for that?

Hi @akash.g

I think the best option here would be to use an nvstreamdemux somewhere after the nvdsosd. You can also use a tee if you need to use the batch somewhere else.
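A hedged sketch of one possible demux layout (the pad names follow gst-launch demuxer-linking syntax; the branch contents and the `config_infer.txt` filename are illustrative placeholders, adapt them to your use case):

```shell
# Sketch: nvstreamdemux splits the batch back into per-source streams after
# nvdsosd, so each stream still carries its drawn bounding boxes and can be
# routed to its own sink/endpoint independently.
gst-launch-1.0 \
  nvstreammux name=mux batch-size=2 width=1920 height=1080 ! queue ! \
  nvinfer batch-size=2 config-file-path="config_infer.txt" ! nvdsosd ! \
  nvstreamdemux name=demux \
  demux.src_0 ! queue ! nvvideoconvert ! video/x-raw,format=BGR ! appsink name=sink0 \
  demux.src_1 ! queue ! nvvideoconvert ! fakesink \
  uridecodebin3 uri="file:///workspace/classroom.mp4" ! queue ! mux.sink_0 \
  uridecodebin3 uri="file:///workspace/classroom.mp4" ! queue ! mux.sink_1
```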

Regards,
Allan Navarro

Embedded SW Engineer at RidgeRun

Contact us: support@ridgerun.com
Developers wiki: https://developer.ridgerun.com/
Website: www.ridgerun.com

Please refer to /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3 for the correct pipeline, and to the Gst-nvstreamdemux page of the DeepStream documentation for the nvstreamdemux use cases.

@allan.navarro @Fiona.Chen This pipeline is working fine with local files.

    uridecodebin3 uri="file:///workspace/full_auto_AK47.mp4" name=decodebin0 ! queue ! nvstreammux0.sink_0
    
    nvstreammux name=nvstreammux0 batch-size=1 width=1920 height=1080 live-source=True sync-inputs=False batched-push-timeout=40000 ! queue ! nvinfer batch-size=1 config-file-path="/workspace/config_files/DeepStream-Yolo/config_infer_primary_yoloV8s-weapon.txt" model-engine-file="/workspace/config_files/DeepStream-Yolo/model_b16_gpu0_fp16.engine" ! queue ! nvmultistreamtiler width=1920 height=1080 rows=1 columns=1 ! nvdsosd ! nvvideoconvert ! video/x-raw,format=BGR ! appsink name=appsink0

But when I use an RTSP link instead of file:///workspace/full_auto_AK47.mp4, it doesn't work. I am sharing the log file below for your review, generated using GST_DEBUG=*:6.
logs.log (5.1 MB)

0:00:07.921203552   104 0x62c0417c93b0 DEBUG   basesink gstbasesink.c:1280:gst_base_sink_query_latency:<appsink0> latency query failed but we are not live

Could you try to set the async and sync to False for the appsink plugin?

Using sync=False (logs.log), and using both sync=False and async=False (logs2.log), does not work.
logs.log (5.1 MB)
logs2.log (5.1 MB)

Hi @akash.g

Could you share the URI? Also, have you tried other elements like nvurisrcbin? It is also a good idea to check the RTSP stream manually with rtspsrc to make sure you are able to read it.
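Something like the following can be used for the manual rtspsrc check (a sketch: the depay/decode elements assume an H.264 stream, and the location is a placeholder; if the camera streams H.265, swap in rtph265depay/h265parse/avdec_h265):

```shell
# Sketch: read the RTSP stream directly with rtspsrc and discard the decoded
# frames; if this runs and -v shows caps negotiating, the stream is readable.
gst-launch-1.0 -v rtspsrc location="rtsp://<user>:<pass>@<host>:<port>/Streaming/Channels/101" ! \
  rtph264depay ! h264parse ! avdec_h264 ! fakesink sync=false
```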

Regards,
Allan Navarro

Embedded SW Engineer at RidgeRun

Contact us: support@ridgerun.com
Developers wiki: https://developer.ridgerun.com/
Website: www.ridgerun.com

Hi @allan.navarro
Thanks for replying. Here is the URI: rtsp://admin:Cisco1947@3.129.17.212:10554/Streaming/Channels/101. It will be live for a few more hours (1-2 hrs).

No, we have not tried nvurisrcbin yet. But we don't think that is the issue, because the pipeline works fine when run in the terminal with a fakesink:

uridecodebin3 uri="rtsp://admin:Cisco1947@3.129.17.212:10554/Streaming/Channels/101" ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 width=1920 height=1080 live-source=True sync-inputs=False batched-push-timeout=40000 ! queue ! nvinfer batch-size=1 config-file-path="/workspace/config_files/DeepStream-Yolo/config_infer_primary_yoloV8s-weapon.txt" model-engine-file="/workspace/config_files/DeepStream-Yolo/model_b16_gpu0_fp16.engine" ! queue ! nvmultistreamtiler width=1920 height=1080 rows=4 columns=4 ! nvdsosd ! nvvideoconvert ! video/x-raw,format=BGR ! fakesink


But when running from Python, with an appsink at the end, it does not work. The log file for this is shared above.

Inside the Python code, the pipeline looks like this:

    uridecodebin3 uri="rtsp://admin:Cisco1947@3.129.17.212:10554/Streaming/Channels/101" ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 width=1920 height=1080 live-source=True sync-inputs=False batched-push-timeout=40000 ! queue ! nvinfer batch-size=1 config-file-path="/workspace/config_files/DeepStream-Yolo/config_infer_primary_yoloV8s-weapon.txt" model-engine-file="/workspace/config_files/DeepStream-Yolo/model_b16_gpu0_fp16.engine" ! queue ! nvmultistreamtiler width=1920 height=1080 rows=4 columns=4 ! nvdsosd ! nvvideoconvert ! video/x-raw,format=BGR ! appsink name=appsink0 sync=False async=False

Please help us resolve this, thanks again.

Let's first analyze this problem with the gst-launch-1.0 command.

  1. When using a local file, the pipeline works normally.
  2. When using fakesink with your RTSP source, the pipeline also works normally.

Is that right? Could you just try the pipeline below?

gst-launch-1.0 uridecodebin3 uri="rtsp://admin:Cisco1947@3.129.17.212:10554/Streaming/Channels/101" ! queue ! nvvideoconvert ! video/x-raw,format=BGR ! appsink

The pipeline you gave works fine in the terminal and shows the timer running. But when we run the same in Python with a slight modification, we don't get any output. We are attaching the code for your reference; the forum is not letting me upload Python code, so I am uploading it as a txt file.

test.txt (3.8 KB)

I have tried the gst-launch-1.0 pipeline from your code. It worked normally. The problem may be caused by the rest of your code, which you need to check yourself.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.