Gst_nvds_buffer_pool_alloc_buffer: assertion 'mem' failed

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I'm trying to tweak deepstream-test3 so it will stream from multiple USB cameras. deepstream-test1-usbcam works as expected, so following suggestions from previous similar questions I replaced the elements in the source bin with a chain similar to deepstream-test1-usbcam: v4l2src -> capsfilter ("video/x-raw, framerate=30/1") -> videoconvert -> nvvideoconvert -> capsfilter ("video/x-raw(memory:NVMM)"). I then linked the bin by connecting the streammux sink pad to the bin's pad (or at least that was my intention), but I didn't manage to run it.

Getting error:
$ python3 deepstream_test_3_usb.py /dev/video0
Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Source
Creating Pgie
Creating tiler
Creating nvvidconv
Creating nvosd
Creating transform
Creating EGLSink
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing…
1 : /dev/video0
Starting pipeline
Using winsys: x11
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:01.759055648 22289 0x678f070 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:01.759222112 22289 0x678f070 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:01.759316096 22289 0x678f070 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
INFO: [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on DLA:
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on GPU:
INFO: [TRT]: conv1 + activation_1/Relu, block_1a_conv_1 + activation_2/Relu, block_1a_conv_2, block_1a_conv_shortcut + add_1 + activation_3/Relu, block_2a_conv_1 + activation_4/Relu, block_2a_conv_2, block_2a_conv_shortcut + add_2 + activation_5/Relu, block_3a_conv_1 + activation_6/Relu, block_3a_conv_2, block_3a_conv_shortcut + add_3 + activation_7/Relu, block_4a_conv_1 + activation_8/Relu, block_4a_conv_2, block_4a_conv_shortcut + add_4 + activation_9/Relu, conv2d_cov, conv2d_cov/Sigmoid, conv2d_bbox,
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine opened error
0:00:16.651856800 22289 0x678f070 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1743> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:16.662597184 22289 0x678f070 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
nvbufsurface: invalid colorFormat 0
nvbufsurface: Error in allocating buffer
Error(-1) in buffer allocation

** (python3:22289): CRITICAL **: 05:28:18.853: gst_nvds_buffer_pool_alloc_buffer: assertion ‘mem’ failed
Error: gst-resource-error-quark: failed to activate bufferpool (13): gstbasetransform.c(1670): default_prepare_output_buffer (): /GstPipeline:pipeline0/GstBin:source-bin-00/Gstnvvideoconvert:convertor_src2:
failed to activate bufferpool

Here is the source deepstream_test_3_usb.py (15.5 KB)

I have spent a lot of time trying to resolve this by reading other similar topics here, but this is as far as I could get so far. Hopefully someone can spot the problem.
Thanks

OK, I have managed to run it. My original code created a ghost pad for each bin, but it was missing the target link for that pad. The updated code adds the ghost pad with a known target:
src_pad = caps_vidconvsrc.get_static_pad("src")
bin_pad = nbin.add_pad(Gst.GhostPad.new("src", src_pad))

Unfortunately there are still some issues with this sample.

  1. First, it's really slow compared to the provided test1-usbcam, even when I run a single device. Here is the output for two cameras:
    aaeon@aaeon-desktop:/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam$ python3 deepstream_test_3_usb.py /dev/video0 /dev/video4
    Creating Pipeline
    Creating streamux
    Creating source_bin 0
    Creating source bin
    source-bin-00
    Creating Source
    Creating source_bin 1
    Creating source bin
    source-bin-01
    Creating Source
    Creating Pgie
    Creating tiler
    Creating nvvidconv
    Creating nvosd
    Creating transform
    Creating EGLSink
    WARNING: Overriding infer-config batch-size 1 with number of sources 2
    Adding elements to Pipeline
    Linking elements in the Pipeline
    Now playing…
    1 : /dev/video0
    2 : /dev/video4
    Starting pipeline
    Using winsys: x11
    ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
    0:00:01.986852841 8805 0x37990520 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
    0:00:01.987050841 8805 0x37990520 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
    0:00:01.987147297 8805 0x37990520 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
    INFO: [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
    INFO: [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
    INFO: [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
    INFO: [TRT]:
    INFO: [TRT]: --------------- Layers running on DLA:
    INFO: [TRT]:
    INFO: [TRT]: --------------- Layers running on GPU:
    INFO: [TRT]: conv1 + activation_1/Relu, block_1a_conv_1 + activation_2/Relu, block_1a_conv_2, block_1a_conv_shortcut + add_1 + activation_3/Relu, block_2a_conv_1 + activation_4/Relu, block_2a_conv_2, block_2a_conv_shortcut + add_2 + activation_5/Relu, block_3a_conv_1 + activation_6/Relu, block_3a_conv_2, block_3a_conv_shortcut + add_3 + activation_7/Relu, block_4a_conv_1 + activation_8/Relu, block_4a_conv_2, block_4a_conv_shortcut + add_4 + activation_9/Relu, conv2d_cov, conv2d_cov/Sigmoid, conv2d_bbox,
    INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
    ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b2_gpu0_int8.engine opened error
    0:00:19.432973523 8805 0x37990520 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1743> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b2_gpu0_int8.engine
    INFO: [Implicit Engine Info]: layers num: 3
    0 INPUT kFLOAT input_1 3x368x640
    1 OUTPUT kFLOAT conv2d_bbox 16x23x40
    2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:19.445275445 8805 0x37990520 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.
Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.

  2. If I try to run with 4 cameras, it fails with this error:
    …skipping same log as above…
    Now playing…
    1 : /dev/video0
    2 : /dev/video4
    3 : /dev/video2
    4 : /dev/video6
    Starting pipeline
    Using winsys: x11
    ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
    0:00:01.978827257 9446 0xa75c840 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
    0:00:01.978989664 9446 0xa75c840 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
    0:00:01.979060739 9446 0xa75c840 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
    INFO: [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
    INFO: [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
    INFO: [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
    INFO: [TRT]:
    INFO: [TRT]: --------------- Layers running on DLA:
    INFO: [TRT]:
    INFO: [TRT]: --------------- Layers running on GPU:
    INFO: [TRT]: conv1 + activation_1/Relu, block_1a_conv_1 + activation_2/Relu, block_1a_conv_2, block_1a_conv_shortcut + add_1 + activation_3/Relu, block_2a_conv_1 + activation_4/Relu, block_2a_conv_2, block_2a_conv_shortcut + add_2 + activation_5/Relu, block_3a_conv_1 + activation_6/Relu, block_3a_conv_2, block_3a_conv_shortcut + add_3 + activation_7/Relu, block_4a_conv_1 + activation_8/Relu, block_4a_conv_2, block_4a_conv_shortcut + add_4 + activation_9/Relu, conv2d_cov, conv2d_cov/Sigmoid, conv2d_bbox,
    INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
    ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine opened error
    0:00:16.164164983 9446 0xa75c840 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1743> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
    INFO: [Implicit Engine Info]: layers num: 3
    0 INPUT kFLOAT input_1 3x368x640
    1 OUTPUT kFLOAT conv2d_bbox 16x23x40
    2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:16.177579539 9446 0xa75c840 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Error: gst-resource-error-quark: Failed to allocate required memory. (13): gstv4l2src.c(658): gst_v4l2src_decide_allocation (): /GstPipeline:pipeline0/GstBin:source-bin-02/GstV4l2Src:usb-cam-source:
Buffer pool activation failed
Exiting app

  3. Sometimes when I close the window it prints "Exiting app", but the process hangs. What could this be?

Here is modified code deepstream_test_3_usb.py (15.6 KB)

Thanks

What are your camera's features? Can you use the 'v4l2-ctl' tool to query the formats, resolutions and framerates of your camera?

Hi,

Sure, here it is:

$ v4l2-ctl -d /dev/video6 --all
Driver Info (not using libv4l2):
Driver name : uvcvideo
Card type : HD USB Camera: HD USB Camera
Bus info : usb-0000:00:14.0-3.4.1
Driver version: 5.4.78
Capabilities : 0x84A00001
Video Capture
Metadata Capture
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04200001
Video Capture
Streaming
Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
Width/Height : 1920/1080
Pixel Format : ‘MJPG’
Field : None
Bytes per Line : 0
Size Image : 4147789
Colorspace : Default
Transfer Function : Default (maps to Rec. 709)
YCbCr/HSV Encoding: Default (maps to ITU-R 601)
Quantization : Default (maps to Full Range)
Flags :
Crop Capability Video Capture:
Bounds : Left 0, Top 0, Width 1920, Height 1080
Default : Left 0, Top 0, Width 1920, Height 1080
Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 1920, Height 1080
Selection: crop_bounds, Left 0, Top 0, Width 1920, Height 1080
Streaming Parameters Video Capture:
Capabilities : timeperframe
Frames per second: 10000000.000 (10000000/1)
Read buffers : 0
brightness 0x00980900 (int) : min=-64 max=64 step=1 default=15 value=15
contrast 0x00980901 (int) : min=0 max=64 step=1 default=40 value=40
saturation 0x00980902 (int) : min=0 max=128 step=1 default=64 value=64
hue 0x00980903 (int) : min=-40 max=40 step=1 default=0 value=0
white_balance_temperature_auto 0x0098090c (bool) : default=1 value=1
gamma 0x00980910 (int) : min=72 max=500 step=1 default=100 value=100
gain 0x00980913 (int) : min=0 max=100 step=1 default=0 value=0
power_line_frequency 0x00980918 (menu) : min=0 max=2 default=1 value=1
white_balance_temperature 0x0098091a (int) : min=2800 max=6500 step=1 default=4600 value=4600 flags=inactive
sharpness 0x0098091b (int) : min=0 max=6 step=1 default=3 value=3
backlight_compensation 0x0098091c (int) : min=0 max=2 step=1 default=1 value=1
exposure_auto 0x009a0901 (menu) : min=0 max=3 default=3 value=3
exposure_absolute 0x009a0902 (int) : min=1 max=5000 step=1 default=156 value=156 flags=inactive
exposure_auto_priority 0x009a0903 (bool) : default=0 value=0

It seems it is MJPEG format. Can you just run "v4l2-ctl -d /dev/video6 --list-formats-ext"?

$ v4l2-ctl -d /dev/video6 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
Index : 0
Type : Video Capture
Pixel Format: ‘MJPG’ (compressed)
Name : Motion-JPEG
Size: Discrete 1920x1080
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 2048x1536
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 1600x1200
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 1280x1024
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 1280x720
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 800x600
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)
Size: Discrete 640x480
Interval: Discrete 0.033s (30.000 fps)
Interval: Discrete 0.040s (25.000 fps)
Interval: Discrete 0.050s (20.000 fps)
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)

Index       : 1
Type        : Video Capture
Pixel Format: 'YUYV'
Name        : YUYV 4:2:2
	Size: Discrete 1920x1080
		Interval: Discrete 0.200s (5.000 fps)
	Size: Discrete 2048x1536
		Interval: Discrete 0.333s (3.000 fps)
	Size: Discrete 1600x1200
		Interval: Discrete 0.200s (5.000 fps)
	Size: Discrete 1280x1024
		Interval: Discrete 0.200s (5.000 fps)
	Size: Discrete 1280x720
		Interval: Discrete 0.100s (10.000 fps)
		Interval: Discrete 0.200s (5.000 fps)
	Size: Discrete 800x600
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.067s (15.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
		Interval: Discrete 0.200s (5.000 fps)
	Size: Discrete 640x480
		Interval: Discrete 0.033s (30.000 fps)
		Interval: Discrete 0.040s (25.000 fps)
		Interval: Discrete 0.050s (20.000 fps)
		Interval: Discrete 0.067s (15.000 fps)
		Interval: Discrete 0.100s (10.000 fps)
		Interval: Discrete 0.200s (5.000 fps)

Since your camera supports multiple formats and resolutions, you need to specify the resolution and format in the caps so that the plugin knows which one will be used.

caps_v4l2src.set_property('caps', Gst.Caps.from_string("video/x-raw, framerate=10/1, format=YUY2, width=1280, height=720"))

Or you can try the following pipeline before you modify your code:
gst-launch-1.0 v4l2src device=/dev/video6 ! 'video/x-raw, framerate=10/1, format=YUY2, width=1280, height=720' ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12' ! nvegltransform ! nveglglessink
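If it helps, you can build the caps string directly from the values v4l2-ctl reported, so the requested mode always matches one the camera advertises. A minimal sketch (the helper name and defaults are just for illustration, chosen to match the 1280x720@10 YUY2 mode listed above):

```python
def v4l2_caps(fmt="YUY2", width=1280, height=720, fps=10):
    """Build a v4l2src capsfilter string for one mode reported by
    `v4l2-ctl --list-formats-ext` (defaults match the 1280x720 YUY2 mode)."""
    return ("video/x-raw, format=%s, width=%d, height=%d, framerate=%d/1"
            % (fmt, width, height, fps))

print(v4l2_caps())
```

The result can then be passed to Gst.Caps.from_string() as in the snippet above.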

Hi,

This runs fine, but when I change my code to use the same properties it makes no difference and performance is still just as slow.

Just to mention again: the Python sample deepstream_test_1_usb.py runs fast, as expected, and since I use exactly the same properties and the same cameras in my code, the slowness doesn't seem related to the camera settings.
The pipeline I construct is different, so I suspect the errors come from there.

Would you please take a look at the code enclosed in the previous reply? I don't expect debugging, but maybe something very obvious is there that could be spotted more easily than from this general warning:

Warning: gst-core-error-quark: A lot of buffers are being dropped. (13): gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:nvvideo-renderer:
There may be a timestamping problem, or this computer is too slow.

Thank you

I have slightly changed the proposed pipeline by adding the muxer and reproduced a similar effect:

gst-launch-1.0 v4l2src device=/dev/video6 ! "video/x-raw, framerate=10/1, format=YUY2, width=1280, height=720" !
nvvideoconvert ! "video/x-raw(memory:NVMM), format=NV12" ! m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 !
nveglglessink
Setting pipeline to PAUSED …
Using winsys: x11
Pipeline is live and does not need PREROLL …
Got context from element ‘eglglessink0’: gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Setting pipeline to PLAYING …
New clock: GstSystemClock
WARNING: from element /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2902): gst_base_sink_is_too_late (): /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0:
There may be a timestamping problem, or this computer is too slow.

OK, I see it was missing live-source. After adding it, the pipeline runs fast:

gst-launch-1.0 v4l2src device=/dev/video6 ! "video/x-raw, framerate=10/1, format=YUY2, width=1280, height=720" !
nvvideoconvert ! "video/x-raw(memory:NVMM), format=NV12" !
m.sink_0 nvstreammux name=m width=1280 height=720 batch-size=1 live-source=1 batched-push-timeout=4000000 !
nvegltransform ! nveglglessink

So I added the live-source property to my Python code, and with 2 cameras it works well. But there is still a major issue: it doesn't let me add more than 2 cameras (I'm planning on processing 8!).
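For what it's worth, the rule of thumb I went by for batched-push-timeout (my own assumption, not something stated in this thread) is roughly one frame interval in microseconds, so nvstreammux pushes a partially filled batch instead of stalling on a slow source:

```python
def batched_push_timeout_us(fps):
    # One frame interval in microseconds; nvstreammux's
    # batched-push-timeout property is specified in microseconds.
    return int(1_000_000 / fps)

print(batched_push_timeout_us(10))  # 100000 us for a 10 fps source
```

The value would then go into streammux.set_property('batched-push-timeout', ...) alongside live-source=1.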

The error:

Creating Pipeline
Creating streamux
Creating source_bin 0
Creating source bin
source-bin-00
Creating Source
Creating source_bin 1
Creating source bin
source-bin-01
Creating Source
Creating source_bin 2
Creating source bin
source-bin-02
Creating Source
Creating Pgie
Creating tiler
Creating nvvidconv
Creating nvosd
Creating transform
Creating EGLSink
WARNING: Overriding infer-config batch-size 1 with number of sources 3
Adding elements to Pipeline
Linking elements in the Pipeline
Now playing…
1 : /dev/video0
2 : /dev/video2
3 : /dev/video4
Starting pipeline
Using winsys: x11
ERROR: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine open error
0:00:02.040254713 19903 0x2bb7a470 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1690> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed
0:00:02.040697777 19903 0x2bb7a470 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1797> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1-usbcam/…/…/…/…/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed, try rebuild
0:00:02.040979552 19903 0x2bb7a470 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1715> [UID = 1]: Trying to create engine from model files
INFO: [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on DLA:
INFO: [TRT]:
INFO: [TRT]: --------------- Layers running on GPU:
INFO: [TRT]: conv1 + activation_1/Relu, block_1a_conv_1 + activation_2/Relu, block_1a_conv_2, block_1a_conv_shortcut + add_1 + activation_3/Relu, block_2a_conv_1 + activation_4/Relu, block_2a_conv_2, block_2a_conv_shortcut + add_2 + activation_5/Relu, block_3a_conv_1 + activation_6/Relu, block_3a_conv_2, block_3a_conv_shortcut + add_3 + activation_7/Relu, block_4a_conv_1 + activation_8/Relu, block_4a_conv_2, block_4a_conv_shortcut + add_4 + activation_9/Relu, conv2d_cov, conv2d_cov/Sigmoid, conv2d_bbox,
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b3_gpu0_int8.engine opened error
0:00:15.189642383 19903 0x2bb7a470 WARN nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1743> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b3_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:15.201483331 19903 0x2bb7a470 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
Error: gst-resource-error-quark: Failed to allocate required memory. (13): gstv4l2src.c(658): gst_v4l2src_decide_allocation (): /GstPipeline:pipeline0/GstBin:source-bin-02/GstV4l2Src:usb-cam-source:
Buffer pool activation failed
Exiting app

Are you sure you want to connect 8 USB cameras? There may be a bandwidth issue with USB cameras. Please refer to: connected more than two usb cameras problem on deepstream-app (Jetson Nano Dev Kit) - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums

Yes, positive, going to run 8 cameras at 2048x1536@10FPS.
Please note I'm not on a 'Jetson Nano Dev Kit' but on an industrial-grade box.
Some of the spec:

Nvidia AGX Xavier
CPU 8-core ARM v8.2 64bit CPU, 8MB L2 + 4MB L3
System Memory 32 GB 256-Bit LPDDR4 x 1 137 GB/s
Storage Device 32GB eMMC
M.2 Key M 2280 x 1 (PCIe[x4])
uSD (microSD) slot x 1
USB type C x 2 for USB 3.2 Gen 1
USB type A x 1 for USB 3.2 Gen 1
USB Type A x 1 for USB 2.0

From what we have checked so far, it should be able to operate 8 cameras. We tested with 8 separate, simultaneous GStreamer pipelines, each simply capturing and displaying on screen, physically with 8 cameras connected via 2 USB hubs (each hub carrying 4 cameras and connected to a separate USB 3 port), and it worked.

The information you provided for hardware platform is just 'Jetson'; can you specify the type? Unfortunately, it is not just a special issue for Nano but for many other Jetson boards: Three V4L2 USB webcams on Xavier not working - Jetson & Embedded Systems / Jetson AGX Xavier - NVIDIA Developer Forums

AI ACCELERATOR Nvidia AGX Xavier
CPU 8-core ARM v8.2 64bit CPU, 8MB L2 + 4MB L3
SYSTEM MEMORY 32 GB 256-Bit LPDDR4 x 1 137 GB/s

I suspect this is something with the pipeline and not a hardware limitation; as I mentioned, the box manages to stream 8 simultaneous camera streams at the same resolution/fps.
Anyway, going to test 8 cameras with the deepstream-app (the reference application).
Will update progress here.

Thanks

Hi @Fiona.Chen,
The reference app fails if more than 2 cameras are connected.
Camera’s specs again:
Type : Video Capture
Pixel Format: 'MJPG' (compressed)
Name : Motion-JPEG

Size: Discrete 2048x1536
Interval: Discrete 0.067s (15.000 fps)
Interval: Discrete 0.100s (10.000 fps)
Interval: Discrete 0.200s (5.000 fps)

Index : 1
Type : Video Capture
Pixel Format: 'YUYV'
Name : YUYV 4:2:2
Size: Discrete 2048x1536
Interval: Discrete 0.333s (3.000 fps)

As I'm trying to process 8 cameras at 2048x1536@10FPS I need MJPG, but as I read here, the reference app's default format is YUYV. Before changing the app to support MJPG I tried a lower FPS, but it didn't work out, with an error very similar to the one I got with my Python code.

The error:

ERROR from src_elem: Failed to allocate required memory.
Debug info: gstv4l2src.c(658): gst_v4l2src_decide_allocation (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstV4l2Src:src_elem:
Buffer pool activation failed
** INFO: <bus_callback:151>: usb bandwidth might be saturated
ERROR from src_elem: Internal data stream error.
Debug info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstV4l2Src:src_elem:
streaming stopped, reason not-negotiated (-4)
** INFO: <bus_callback:147>: incorrect camera parameters provided, please provide supported resolution and frame rate
Quitting

I’m enclosing the config file, hopefully you would take a look.
Thanks
source4_2048_1536_usb_dec_infer_resnet_int8.txt (4.1 KB)
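For reference, the relevant source section in my config is along these lines (the values here are illustrative, matching the camera's mode list above; note that camera-v4l2-dev-node takes the node number, so 0 means /dev/video0):

```ini
[source0]
enable=1
# type 1 = CameraV4L2 (USB camera)
type=1
# The default capture format is YUYV; per the mode list above the camera
# only does 3 fps in YUYV at 2048x1536, so request that rather than 10 fps.
camera-width=2048
camera-height=1536
camera-fps-n=3
camera-fps-d=1
# node number, not a path: 0 -> /dev/video0
camera-v4l2-dev-node=0
```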

The v4l2 driver has reported a USB bandwidth error. It is a bandwidth issue.
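The rough arithmetic behind that conclusion (my own estimate, not a measurement): uncompressed YUYV is 2 bytes per pixel, so 8 cameras at 2048x1536@10fps would need about 503 MB/s, which is around the raw 5 Gb/s (~500 MB/s, before protocol overhead) limit of a single USB 3.2 Gen 1 controller:

```python
def yuyv_bandwidth_mb_s(width, height, fps, cameras=1):
    # YUYV (YUY2) packs 2 bytes per pixel; returns MB/s (1 MB = 1e6 bytes).
    return width * height * 2 * fps * cameras / 1e6

per_cam = yuyv_bandwidth_mb_s(2048, 1536, 10)            # ~62.9 MB/s each
total = yuyv_bandwidth_mb_s(2048, 1536, 10, cameras=8)   # ~503 MB/s total
print(round(per_cam, 1), round(total, 1))
```

MJPG compresses each frame, which is why capturing MJPG instead of raw YUYV is the usual way around this limit, assuming the decode side can keep up.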