How can I use gstreamer and MIPI Cam with deepstream from gst-launch

• Hardware Platform (Jetson AGX)
• DeepStream Version: 5.0
• JetPack Version: 4.5.1

Hi, I want to ask how to connect a MIPI camera to DeepStream with gst-launch-1.0.

I tried deepstream-app -c configs/deepstream-app/source1_usb_dec_infer_resnet_int8.txt and it works OK.

Then I run:

gst-launch-1.0 v4l2src device=/dev/video0 ! \
video/x-raw,format=UYVY,width=1280,height=720,framerate=60/1 ! \
videoconvert ! \
nvvidconv ! \
'video/x-raw(memory:NVMM),format=NV12' ! \
nvvidconv ! \
m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
nvinfer config-file-path=configs/deepstream-app/config_infer_primary.txt ! \
nvvideoconvert ! \
nvdsosd ! \
nvegltransform ! \
nveglglessink sync=false

and it shows:

Setting pipeline to PAUSED ...

Using winsys: x11 

WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
0:00:03.377703738 27536   0x5593068c70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:03.378024891 27536   0x5593068c70 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:1643> [UID = 1]: Backend has maxBatchSize 1 whereas 30 has been requested
0:00:03.378067739 27536   0x5593068c70 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1814> [UID = 1]: deserialized backend context :/opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine failed to match config params, trying rebuild
0:00:03.381758411 27536   0x5593068c70 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files
INFO: [TRT]: Reading Calibration Cache for calibrator: EntropyCalibration2
INFO: [TRT]: Generated calibration scales using calibration cache. Make sure that calibration cache has latest scales.
INFO: [TRT]: To regenerate calibration cache, please delete the existing one. TensorRT will generate a new calibration cache.
INFO: [TRT]: 
INFO: [TRT]: --------------- Layers running on DLA: 
INFO: [TRT]: 
INFO: [TRT]: --------------- Layers running on GPU: 
INFO: [TRT]: conv1 + activation_1/Relu, block_1a_conv_1 + activation_2/Relu, block_1a_conv_2, block_1a_conv_shortcut + add_1 + activation_3/Relu, block_2a_conv_1 + activation_4/Relu, block_2a_conv_2, block_2a_conv_shortcut + add_2 + activation_5/Relu, block_3a_conv_1 + activation_6/Relu, block_3a_conv_2, block_3a_conv_shortcut + add_3 + activation_7/Relu, block_4a_conv_1 + activation_8/Relu, block_4a_conv_2, block_4a_conv_shortcut + add_4 + activation_9/Relu, conv2d_cov, conv2d_cov/Sigmoid, conv2d_bbox, 
INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: [TRT]: Detected 1 inputs and 2 output network tensors.
ERROR: Serialize engine failed because of file path: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine opened error
0:00:45.126769760 27536   0x5593068c70 WARN                 nvinfer gstnvinfer.cpp:616:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1744> [UID = 1]: failed to serialize cude engine to file: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:45.157467626 27536   0x5593068c70 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:configs/deepstream-app/config_infer_primary.txt sucessfully
Pipeline is live and does not need PREROLL ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstNvStreamMux:m: Input buffer number of surfaces (0) must be equal to mux->num_surfaces_per_frame (1)
	Set nvstreammux property num-surfaces-per-frame appropriately

Additional debug info:
/dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmultistream/gstnvstreammux.c(364): gst_nvstreammux_chain (): /GstPipeline:pipeline0/GstNvStreamMux:m
Execution ended after 0:00:00.796363504
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

How can I fix it?

nvvidconv is not a deepstream plugin. DeepStream SDK FAQ - #15 by Fiona.Chen


@Fiona.Chen, thanks!

gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 bufapi-version=1 !  \
'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=60/1' ! \
nvvideoconvert ! \
'video/x-raw(memory:NVMM),format=NV12' ! \
m.sink_0 nvstreammux width=1920 height=1080 batch-size=1 name=m ! \
nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt batch-size=1 ! \
nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink sync=0

That works!

Hi @Fiona.Chen, can I ask how to use OpenCV with appsink to get the image?

I used this before:

std::string cmd = "v4l2src io-mode=4 device=/dev/video" + std::to_string(i) + " do-timestamp=true ! video/x-raw, width=1920, height=1080, framerate=60/1, format=UYVY ! queue max-size-buffers=1 ! appsink sync=false";

cv::VideoCapture *cap = new cv::VideoCapture(cmd.c_str(), cv::CAP_GSTREAMER);

cap->read(image);

Is it possible to use the DeepStream pipeline with an OpenCV VideoCapture? I tried the following and it failed:

gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 bufapi-version=1 ! \
'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=60/1' ! \
nvvideoconvert ! \
'video/x-raw(memory:NVMM),format=NV12' ! \
m.sink_0 nvstreammux width=1920 height=1080 batch-size=1 name=m ! \
nvinfer config-file-path=config_infer_primary.txt batch-size=1 ! \
nvvideoconvert ! \
nvdsosd ! \
nvegltransform ! \
appsink sync=false

Do you want to get the frame with the OSD drawn on it? You need to remove nvegltransform and use nvvideoconvert to convert the HW (NVMM) buffer to a SW buffer, since OpenCV cannot handle HW buffers.
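For reference, here is a minimal sketch of what such an OpenCV-friendly pipeline string could look like, following the advice above: keep the working DeepStream pipeline up through nvdsosd, drop nvegltransform, and add an extra nvvideoconvert plus caps to copy the frame out of NVMM memory into system memory before appsink. The exact caps (BGRx as the intermediate format) and the config path are assumptions to verify on your own setup, not a confirmed recipe:

```python
# Sketch: assemble a GStreamer pipeline string for cv2.VideoCapture(..., cv2.CAP_GSTREAMER),
# mirroring the working gst-launch pipeline above but ending in appsink.
# Assumptions: nvv4l2camerasrc source, 1920x1080@60, config file in the current directory.

def build_ds_appsink_pipeline(device="/dev/video0", width=1920, height=1080, fps=60):
    return (
        f"nvv4l2camerasrc device={device} bufapi-version=1 ! "
        f"video/x-raw(memory:NVMM),width={width},height={height},framerate={fps}/1 ! "
        "nvvideoconvert ! video/x-raw(memory:NVMM),format=NV12 ! "
        f"m.sink_0 nvstreammux width={width} height={height} batch-size=1 name=m ! "
        "nvinfer config-file-path=config_infer_primary.txt batch-size=1 ! "
        "nvvideoconvert ! nvdsosd ! "
        # HW -> SW: a second nvvideoconvert copies the frame out of NVMM memory
        "nvvideoconvert ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! "
        "appsink sync=false"
    )

pipeline = build_ds_appsink_pipeline()
# import cv2
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
# ok, frame = cap.read()  # frame is a BGR numpy array OpenCV can use directly
```

Note that inside a string handed to OpenCV there is no shell involved, so the caps with parentheses like video/x-raw(memory:NVMM) do not need the quoting used on the gst-launch command line.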

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.