nvvideoconvert issue

Hello, I would like to know how to solve this issue. It happens whenever I change the batch size of nvstreammux. Below you can find the error and also the pipelines to reproduce it.
Please let me know how to solve it.

•JetPack 4.5
•DS 5.0
•Jetson Xavier AGX
•Num Sources 2
•Inference model PeopleNet
•GST Debug Level = 1

•ISSUE and scenario description

•STREAM FROM RTSP
Working scenario
Batch size 1
batched-push-timeout =-1
pipeline:

gst-launch-1.0 -v uridecodebin uri="rtsp://root:pass@192.168.3.109/axis-media/media.amp?resolution=1920x1080" ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 batched-push-timeout=-1 width=1920 height=1080 live-source=1 ! queue ! nvinfer config-file-path=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.inference model-engine-file=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.model ! queue ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! fakesink silent=false uridecodebin uri="rtsp://root:pass@192.168.3.109/axis-media/media.amp?resolution=1920x1080" ! queue ! nvstreammux0.sink_1

Not Working scenario
Batch size 2
batched-push-timeout =40000
pipeline:

gst-launch-1.0 -v uridecodebin uri="rtsp://root:pass@192.168.3.109/axis-media/media.amp?resolution=1920x1080" ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=2 batched-push-timeout=40000 width=1920 height=1080 live-source=1 ! queue ! nvinfer config-file-path=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.inference model-engine-file=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.model ! queue ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! fakesink silent=false uridecodebin uri="rtsp://root:pass@192.168.3.109/axis-media/media.amp?resolution=1920x1080" ! queue ! nvstreammux0.sink_1

•STREAM FROM FILE
Working scenario
Batch size 1
batched-push-timeout =-1
pipeline:

gst-launch-1.0 -v uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 batched-push-timeout=-1 width=1920 height=1080 live-source=0 ! queue ! nvinfer config-file-path=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.inference model-engine-file=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.model ! queue ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! fakesink silent=false uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_1

Not Working scenario
Batch size 2
batched-push-timeout =40000
pipeline:

gst-launch-1.0 -v uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=2 batched-push-timeout=40000 width=1920 height=1080 live-source=0 ! queue ! nvinfer config-file-path=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.inference model-engine-file=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.model ! queue ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! fakesink silent=false uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_1

•ISSUE DETECTED
0:00:05.662097222 8758 0x558ea54000 ERROR nvvideoconvert gstnvvideoconvert.c:3120:gst_nvvideoconvert_transform: buffer transform failed
0:00:05.685226044 8758 0x558ec6c0f0 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:05.685351934 8758 0x558ec6c0f0 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Internal data stream error.

•JETPACK 4.6
•DS 6.0
•Jetson Xavier AGX
•Num Sources 2
•Old nvstreammux used (not the new beta mux)
•Inference model PeopleNet
•GST Debug Level = 1

•STREAM FROM RTSP
Working scenario
Batch size 1
batched-push-timeout =-1
pipeline:

gst-launch-1.0 -v uridecodebin uri="rtsp://root:pass@192.168.3.109/axis-media/media.amp?resolution=1920x1080" ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 batched-push-timeout=-1 width=1920 height=1080 live-source=1 ! queue ! nvinfer config-file-path=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.inference model-engine-file=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.model ! queue ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! fakesink silent=false uridecodebin uri="rtsp://root:pass@192.168.3.109/axis-media/media.amp?resolution=1920x1080" ! queue ! nvstreammux0.sink_1

Not Working scenario
Batch size 2
batched-push-timeout =40000
pipeline:

gst-launch-1.0 -v uridecodebin uri="rtsp://root:pass@192.168.3.109/axis-media/media.amp?resolution=1920x1080" ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=2 batched-push-timeout=40000 width=1920 height=1080 live-source=1 ! queue ! nvinfer config-file-path=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.inference model-engine-file=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.model ! queue ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! fakesink silent=false uridecodebin uri="rtsp://root:pass@192.168.3.109/axis-media/media.amp?resolution=1920x1080" ! queue ! nvstreammux0.sink_1

•STREAM FROM FILE
Working scenario
Batch size 1
batched-push-timeout =-1
pipeline:

gst-launch-1.0 -v uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 batched-push-timeout=-1 width=1920 height=1080 live-source=0 ! queue ! nvinfer config-file-path=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.inference model-engine-file=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.model ! queue ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! fakesink silent=false uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_1

Not Working scenario
Batch size 2
batched-push-timeout =40000
pipeline:

gst-launch-1.0 -v uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=2 batched-push-timeout=40000 width=1920 height=1080 live-source=0 ! queue ! nvinfer config-file-path=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.inference model-engine-file=/srv/MLManager/media/uploads/models/model_2e5ee5f5-ef21-4c2c-b769-dff6ef7c2ac2/file.model ! queue ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! fakesink silent=false uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_1

•ISSUE DETECTED
0:00:05.662097222 8758 0x558ea54000 ERROR nvvideoconvert gstnvvideoconvert.c:3120:gst_nvvideoconvert_transform: buffer transform failed
0:00:05.685226044 8758 0x558ec6c0f0 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:05.685351934 8758 0x558ec6c0f0 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
ERROR: from element /GstPipeline:pipeline0/GstNvInfer:nvinfer0: Internal data stream error.

Hello,
you can reproduce the issue described above with the two pipelines below.
I used the standard PeopleNet model and the sample stream you can find under the /opt/nvidia/deepstream folder.
Please let us know, as solving this issue is really necessary and urgent for us.

Hope to hear from you soon.

WORKING SCENARIO

gst-launch-1.0 -v uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=1 batched-push-timeout=-1 width=1920 height=1080 live-source=1 ! queue ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/config_infer_primary_peoplenet.txt model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine ! queue ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! fakesink silent=false uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_1

NOT WORKING SCENARIO

gst-launch-1.0 -v uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_0 nvstreammux name=nvstreammux0 batch-size=2 batched-push-timeout=40000 width=1920 height=1080 live-source=1 ! queue ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/config_infer_primary_peoplenet.txt model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine ! queue ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! fakesink silent=false uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! nvstreammux0.sink_1

Hi,

I have exactly the same issue. Why does nvvideoconvert fail if batch-size != 1? Can you please check what happens in gstnvvideoconvert.c at line 3120? Is there any workaround or solution? Thanks

Hi.
I have the same issue.

I am running the same type of experiment, with a batch-size > 1 and nvvideoconvert in my pipeline, and I get the same error.

Did you find a solution?

Hello,
not yet, we are still waiting for support from the NVIDIA team.
@pshin @Morganh @rsc44 @DaneLLL
Could you please help us to solve the issue?

@Rik @ile

It looks like you have videoconvert running on the muxed (combined) feed, which lives in GPU memory.

So the solution is to use nvstreamdemux to split the feeds apart and pass each one back to the CPU before using videoconvert, as in the figure below.

uridecodebin → queue → streammux.sink0 | nvstreammux → nvinfer → queue → nvstreamdemux | demux.src0 → queue → "video/x-raw(memory:NVMM), format=NV12" → nvvideoconvert → "video/x-raw, format=RGBA" → videoconvert → fakesink
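Translated into a command, a sketch of that layout might look like the following. This is an untested template, not a verified pipeline: it reuses the DeepStream sample file and PeopleNet config paths quoted earlier in this thread, assumes two sources with one demux branch per source, and needs a Jetson with DeepStream installed to actually run.

```shell
# Hypothetical sketch of the mux -> infer -> demux layout described above.
# Paths are the DeepStream sample assets mentioned earlier in this thread.
gst-launch-1.0 \
  uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_0 \
  uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_1 \
  nvstreammux name=m batch-size=2 batched-push-timeout=40000 width=1920 height=1080 ! \
  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/config_infer_primary_peoplenet.txt ! \
  queue ! nvstreamdemux name=d \
  d.src_0 ! queue ! "video/x-raw(memory:NVMM),format=NV12" ! nvvideoconvert ! "video/x-raw,format=RGBA" ! videoconvert ! fakesink \
  d.src_1 ! queue ! "video/x-raw(memory:NVMM),format=NV12" ! nvvideoconvert ! "video/x-raw,format=RGBA" ! videoconvert ! fakesink
```

After the demuxer each branch carries a single non-batched frame, so the nvvideoconvert → videoconvert hop to system memory should no longer hit the batched-buffer limitation.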


For your pipeline you would ideally want something like the one below.

Since you are ending in a fakesink, there is no need to demux/convert after nvinfer.

gst-launch-1.0 \
uridecodebin ! nvvideoconvert ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12" ! m.sink_0 \
uridecodebin ! nvvideoconvert ! "video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12" ! m.sink_1 \
nvstreammux name=m batch-size=2 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/config_infer_primary_peoplenet.txt model-engine-file=/opt/nvidia/deepstream/deepstream/samples/models/tlt_pretrained_models/peoplenet/resnet34_peoplenet_pruned.etlt_b1_gpu0_fp16.engine ! queue ! fakesink async=0

Also, just reference this document in the future.

@rsc44 In my use case I avoided the nvstreamdemux as it didn’t support adding/removing streams dynamically

Is the limitation described here still relevant?

In my application I demux the streams manually and set the correct timestamps. If the above limitation is still relevant, can you please suggest an alternative solution? For example, if I write a GStreamer element that removes the batch-size from the caps, will nvvideoconvert work? Thanks

Please upgrade to DeepStream 6.0 GA.


@Fiona.Chen nice to know that DeepStream 6.0 has finally fixed this bug. For now I still need DS5 compatibility, so I modified my application to request the maximum number of demuxer pads I need while in the NULL state and then link/unlink them as required.

@Rik After nvstreammux, the buffers contain batched data in HW buffers. nvvideoconvert cannot convert batched data in HW buffers to non-batched data in SW buffers. It can only convert batched data in HW buffers to batched data in HW buffers, or non-batched data in HW buffers to non-batched data in HW/SW buffers. When you want to go from batched data in HW buffers to non-batched data in SW buffers, you first need nvmultistreamtiler or nvstreamdemux to turn the batched data into non-batched data. Public GStreamer plugins such as videoconvert can only handle ordinary (non-batched) data in SW buffers.

So your pipelines are wrong. Please add nvstreamdemux or nvmultistreamtiler in your pipeline. GStreamer Plugin Overview — DeepStream 6.0 Release documentation
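As a concrete illustration of the nvmultistreamtiler option: the tiler composites the batch into one non-batched frame, after which the nvvideoconvert → videoconvert chain from the original pipelines can run unchanged. This is an untested sketch that reuses the sample file and PeopleNet config paths from the working pipeline earlier in the thread; the tiler geometry (rows/columns/output size) is an assumption you would tune for your layout.

```shell
# Hypothetical sketch: collapse the batch with nvmultistreamtiler before
# converting to SW buffers. Paths reuse the DeepStream samples cited above.
gst-launch-1.0 \
  uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_0 \
  uridecodebin uri=file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! queue ! m.sink_1 \
  nvstreammux name=m batch-size=2 batched-push-timeout=40000 width=1920 height=1080 ! \
  nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models/config_infer_primary_peoplenet.txt ! \
  queue ! nvmultistreamtiler rows=1 columns=2 width=3840 height=1080 ! \
  nvvideoconvert ! "video/x-raw,format=RGBA" ! videoconvert ! fakesink silent=false
```

Note the trade-off versus nvstreamdemux: the tiler gives you one composited frame for all sources, while the demuxer gives you one branch per source.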

Hello,
ok, thanks, I will let you know as soon as possible.
Thank you
