I’m working on a multi-in, multi-out stream-processing pipeline with the following DeepStream pipeline architecture:
multiple source bins [uridecodebin --> nvvideoconvert --> capsfilter(RGBA)] --> nvstreammux --> nvinfer --> nvtracker --> tee
tee branch 1: nvstreamdemux --> multiple sink bins [nvdsosd --> nvvideoconvert --> capsfilter --> nvv4l2h264enc --> rtph264pay --> udpsink]
tee branch 2: nvmsgconv --> nvmsgbroker
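For reference, here is a minimal sketch of how one per-stream sink branch could be reproduced standalone with gst-launch-1.0, which can help isolate whether nvv4l2h264enc itself fails at higher stream counts. The caps string, host, and port are assumptions for illustration, not values from my actual app:

```python
# Hedged sketch: build a gst-launch-1.0 command line that replicates ONE
# source + sink branch of the pipeline above, so the encoder path can be
# tested outside the full multi-stream app. Element properties below
# (caps, host, port) are placeholder assumptions.

SINK_BRANCH = [
    "nvvideoconvert",
    'capsfilter caps="video/x-raw(memory:NVMM),format=NV12"',
    "nvv4l2h264enc",
    "rtph264pay",
    "udpsink host=127.0.0.1 port=5400",
]

def launch_line(uri: str, branch=SINK_BRANCH) -> str:
    """Join the elements of a single decode -> encode -> UDP chain
    with GStreamer's ' ! ' link syntax."""
    elems = [f'uridecodebin uri="{uri}"', "nvvideoconvert"] + list(branch)
    return "gst-launch-1.0 " + " ! ".join(elems)

print(launch_line("rtsp://camera-host/stream1"))
```

Running several such lines in parallel (one per distinct RTSP camera) should show whether the CUDA 700 error is tied to the encoder branch or to the upstream mux/infer stages.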
The pipeline throws the following errors when more than ~5 distinct RTSP streams are given as input:
Cuda failure: status=700
Error: gst-resource-error-quark: Could not get/set settings from/on resource. (13): gstv4l2object.c(3473): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/GstBin:sink-bin-00/nvv4l2h264enc:nvvideo-encoder:
Device is in streaming mode
Cuda failure: status=700 in CreateTextureObj at line 2513
nvbufsurftransform.cpp(2369) : getLastCudaError() CUDA error : Recevied NvBufSurfTransformError_Execution_Error : (709) context is destroyed.
Cuda failure: status=46 in CreateTextureObj at line 2513
Cuda failure: status=46
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=46 in CreateTextureObj at line 2496
nvbufsurftransform.cpp(2369) : getLastCudaError() CUDA error : Recevied NvBufSurfTransformError_Execution_Error : (46) all CUDA-capable devices are busy or unavailable.
nvbufsurftransform.cpp(2369) : getLastCudaError() CUDA error : Recevied NvBufSurfTransformError_Execution_Error : (46) all CUDA-capable devices are busy or unavailable.
Segmentation fault (core dumped)
However, the pipeline works fine with the same (or even a larger) number of input files, or with a single RTSP URL replicated multiple times.
• Hardware Platform (Jetson / GPU)=Tesla T4
• DeepStream Version=5.0
• JetPack Version (valid for Jetson only)=NA
• TensorRT Version=7.0
• NVIDIA GPU Driver Version (valid for GPU only)=440.100
P.S.: I’m using the Python bindings for development.