GstPipeline:pipeline0/nvv4l2h264enc:encoder: Maybe be due to not enough memory or failing driver

nvidia-driver:


deepstream-version: 6.4

The error is as follows:

Creating Pipeline 
 
Creating streammux 
 
Creating uridecodebin for [rtmp://127.0.0.1:10935/test/3]
source-bin-00
Creating Pgie 
 
Creating H264 Encoder
Creating H264 rtppay
Linking elements in the Pipeline 

0:00:00.112937172 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:sink> Unable to try format: Unknown error -1
0:00:00.112954868 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<encoder:sink> Could not probe minimum capture size for pixelformat YM12
0:00:00.112960005 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:sink> Unable to try format: Unknown error -1
0:00:00.112963669 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<encoder:sink> Could not probe maximum capture size for pixelformat YM12
0:00:00.112971627 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:sink> Unable to try format: Unknown error -1
0:00:00.112975346 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<encoder:sink> Could not probe minimum capture size for pixelformat Y444
0:00:00.112978397 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:sink> Unable to try format: Unknown error -1
0:00:00.112981989 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<encoder:sink> Could not probe maximum capture size for pixelformat Y444
0:00:00.112989492 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:sink> Unable to try format: Unknown error -1
0:00:00.112993275 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<encoder:sink> Could not probe minimum capture size for pixelformat P410
0:00:00.112996440 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:sink> Unable to try format: Unknown error -1
0:00:00.112999954 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<encoder:sink> Could not probe maximum capture size for pixelformat P410
0:00:00.113009287 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:sink> Unable to try format: Unknown error -1
0:00:00.113012527 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<encoder:sink> Could not probe minimum capture size for pixelformat PM10
0:00:00.113015576 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:sink> Unable to try format: Unknown error -1
0:00:00.113019587 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<encoder:sink> Could not probe maximum capture size for pixelformat PM10
0:00:00.113043737 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:sink> Unable to try format: Unknown error -1
0:00:00.113049683 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<encoder:sink> Could not probe minimum capture size for pixelformat NM12
0:00:00.113053303 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:sink> Unable to try format: Unknown error -1
0:00:00.113057457 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<encoder:sink> Could not probe maximum capture size for pixelformat NM12
0:00:00.113097228 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:src> Unable to try format: Unknown error -1
0:00:00.113103375 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<encoder:src> Could not probe minimum capture size for pixelformat H264
0:00:00.113109008 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<encoder:src> Unable to try format: Unknown error -1
0:00:00.113117385 51867 0x55c0053ffc00 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<encoder:src> Could not probe maximum capture size for pixelformat H264
0:00:06.270525896 51867 0x55c0053ffc00 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x544x960       
1   OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60        
2   OUTPUT kFLOAT output_cov/Sigmoid 4x34x60         

0:00:06.359784146 51867 0x55c0053ffc00 INFO                 nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary-inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/models/Primary_Detector/resnet18_trafficcamnet.etlt_b30_gpu0_int8.engine
0:00:06.366809230 51867 0x55c0053ffc00 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary-inference> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/dstest1_pgie_config.txt sucessfully
Decodebin child added: source 

Decodebin child added: typefindelement0 

Decodebin child added: decodebin0 

Starting pipeline 

Set_state PLAYING 

Decodebin child added: queue2-0 

Decodebin child added: flvdemux0 

Decodebin child added: multiqueue0 

Decodebin child added: h264parse0 

Decodebin child added: capsfilter0 

Decodebin child added: nvv4l2decoder0 

0:00:06.393327707 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393362541 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MJPG
0:00:06.393378987 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393393188 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MJPG
0:00:06.393429202 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393442811 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat AV10
0:00:06.393477859 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393518588 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat AV10
0:00:06.393556141 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393589134 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat DVX5
0:00:06.393603163 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393617106 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat DVX5
0:00:06.393639795 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393654306 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat DVX4
0:00:06.393666927 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393680299 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat DVX4
0:00:06.393704270 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393719443 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG4
0:00:06.393730676 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393749084 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG4
0:00:06.393773219 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393787299 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat MPG2
0:00:06.393799686 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393812604 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat MPG2
0:00:06.393838617 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393853698 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H265
0:00:06.393868880 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393882853 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H265
0:00:06.393905699 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393919482 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP90
0:00:06.393931855 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393949130 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP90
0:00:06.393970549 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.393984462 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat VP80
0:00:06.393996933 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.394010647 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat VP80
0:00:06.394036134 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.394049496 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe minimum capture size for pixelformat H264
0:00:06.394063092 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:sink> Unable to try format: Unknown error -1
0:00:06.394075639 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:sink> Could not probe maximum capture size for pixelformat H264
0:00:06.394155402 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:06.394170428 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat Y444
0:00:06.394183329 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:06.394197838 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat Y444
0:00:06.394229166 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:06.394285322 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat P410
0:00:06.394300726 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:06.394314771 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat P410
0:00:06.394343221 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:06.394357379 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat PM10
0:00:06.394370000 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:06.394383952 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat PM10
0:00:06.394410127 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:06.394423924 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2985:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe minimum capture size for pixelformat NM12
0:00:06.394436554 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:3100:gst_v4l2_object_get_nearest_size:<nvv4l2decoder0:src> Unable to try format: Unknown error -1
0:00:06.394450183 51867 0x7f1cf0016b60 WARN                    v4l2 gstv4l2object.c:2991:gst_v4l2_object_probe_caps_for_format:<nvv4l2decoder0:src> Could not probe maximum capture size for pixelformat NM12
In cb_newpad

gstname= video/x-raw
sink_0
Decodebin linked to pipeline
0:00:06.531737716 51867 0x7f1cf0016b60 WARN          v4l2bufferpool gstv4l2bufferpool.c:1114:gst_v4l2_buffer_pool_start:<encoder:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:06.532194671 51867 0x7f1cf0016b60 WARN            v4l2videodec gstv4l2videodec.c:2258:gst_v4l2_video_dec_decide_allocation:<nvv4l2decoder0> Duration invalid, not setting latency
0:00:06.532215381 51867 0x7f1cf0016b60 WARN          v4l2bufferpool gstv4l2bufferpool.c:1114:gst_v4l2_buffer_pool_start:<nvv4l2decoder0:pool:src> Uncertain or not enough buffers, enabling copy threshold
0:00:06.533602774 51867 0x7f1ccc049060 WARN          v4l2bufferpool gstv4l2bufferpool.c:1565:gst_v4l2_buffer_pool_dqbuf:<nvv4l2decoder0:pool:src> Driver should never set v4l2_buffer.field to ANY
time --:2025-01-02 21:39:12.402626
time --:2025-01-02 21:39:12.403074
time --:2025-01-02 21:39:12.403373
time --:2025-01-02 21:39:12.403660

(python3:51867): GStreamer-WARNING **: 21:39:12.454: (../gst/gstinfo.c:556):gst_debug_log_valist: runtime check failed: (object == NULL || G_IS_OBJECT (object))

(python3:51867): GStreamer-WARNING **: 21:39:12.454: (../gst/gstinfo.c:1340):gst_debug_log_default: runtime check failed: (object == NULL || G_IS_OBJECT (object))
0:00:07.351692951 51867 0x55c005249300 ERROR         v4l2bufferpool gstv4l2bufferpool.c:320:gst_v4l2_buffer_pool_copy_buffer:buffer: 0x7f1cec054b40, pts 0:00:00.000000000, dts 99:99:99.999999999, dur 99:99:99.999999999, size 64, offset none, offset_end none, flags 0x0 ERROR in BufSurfacecopy 

0:00:07.351719531 51867 0x55c005249300 ERROR         v4l2bufferpool gstv4l2bufferpool.c:2420:gst_v4l2_buffer_pool_process:<encoder:pool:sink> failed to prepare data
0:00:07.351799644 51867 0x55c005249300 WARN            v4l2videoenc gstv4l2videoenc.c:1756:gst_v4l2_video_enc_handle_frame:<encoder> error: Failed to process frame.
0:00:07.351806244 51867 0x55c005249300 WARN            v4l2videoenc gstv4l2videoenc.c:1756:gst_v4l2_video_enc_handle_frame:<encoder> error: Maybe be due to not enough memory or failing driver
0:00:07.351916611 51867 0x55c005249300 WARN                 nvinfer gstnvinfer.cpp:2406:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:07.351940718 51867 0x55c005249300 WARN                 nvinfer gstnvinfer.cpp:2406:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason error (-5)
Error: gst-resource-error-quark: Failed to process frame. (1): gstv4l2videoenc.c(1756): gst_v4l2_video_enc_handle_frame (): /GstPipeline:pipeline0/nvv4l2h264enc:encoder:
Maybe be due to not enough memory or failing driver
err result= []
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2406): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason error (-5)
err result= []
time --:2025-01-02 21:39:12.483294
time --:2025-01-02 21:39:12.483631
time --:2025-01-02 21:39:12.483911
time --:2025-01-02 21:39:12.484218

(python3:51867): GStreamer-WARNING **: 21:39:12.487: (../gst/gstinfo.c:556):gst_debug_log_valist: runtime check failed: (object == NULL || G_IS_OBJECT (object))

(python3:51867): GStreamer-WARNING **: 21:39:12.487: (../gst/gstinfo.c:1340):gst_debug_log_default: runtime check failed: (object == NULL || G_IS_OBJECT (object))
0:00:07.384683471 51867 0x55c005249300 ERROR         v4l2bufferpool gstv4l2bufferpool.c:320:gst_v4l2_buffer_pool_copy_buffer:buffer: 0x7f1cec054c60, pts 0:00:00.040000000, dts 99:99:99.999999999, dur 99:99:99.999999999, size 64, offset none, offset_end none, flags 0x0 ERROR in BufSurfacecopy 

0:00:07.384698444 51867 0x55c005249300 ERROR         v4l2bufferpool gstv4l2bufferpool.c:2420:gst_v4l2_buffer_pool_process:<encoder:pool:sink> failed to prepare data
0:00:07.384708085 51867 0x55c005249300 WARN            v4l2videoenc gstv4l2videoenc.c:1756:gst_v4l2_video_enc_handle_frame:<encoder> error: Failed to process frame.
0:00:07.384714162 51867 0x55c005249300 WARN            v4l2videoenc gstv4l2videoenc.c:1756:gst_v4l2_video_enc_handle_frame:<encoder> error: Maybe be due to not enough memory or failing driver
Error: gst-resource-error-quark: Failed to process frame. (1): gstv4l2videoenc.c(1756): gst_v4l2_video_enc_handle_frame (): /GstPipeline:pipeline0/nvv4l2h264enc:encoder:
Maybe be due to not enough memory or failing driver
err result= []
time --:2025-01-02 21:39:12.514621
0:00:07.417395620 51867 0x55c0240236a0 WARN          v4l2bufferpool gstv4l2bufferpool.c:1565:gst_v4l2_buffer_pool_dqbuf:<encoder:pool:src> Driver should never set v4l2_buffer.field to ANY
0:00:07.447890373 51867 0x55c0052491e0 WARN                 basesrc gstbasesrc.c:3127:gst_base_src_loop:<source> error: Internal data stream error.
0:00:07.447924138 51867 0x55c0052491e0 WARN                 basesrc gstbasesrc.c:3127:gst_base_src_loop:<source> error: streaming stopped, reason error (-5)
Error: gst-stream-error-quark: Internal data stream error. (1): ../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline0/GstURIDecodeBin:source-bin-00/GstRtmp2Src:source:
streaming stopped, reason error (-5)
err result= ['source-bin-00']

The video stream itself is fine: rtmp://127.0.0.1:10935/test/3

The stock sample runs normally:

python3 deepstream_test1_rtsp_in_rtsp_out.py -i rtmp://127.0.0.1:10935/test/3

I wrote a demo based on this sample, but it fails with the error shown above:
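For reference, the bus handler in the demo below maps an error back to its source bin by matching the `source-bin-NN` name embedded in the GStreamer element path. A minimal, self-contained illustration of that matching, using an error string taken from the log above:

```python
import re

# Element path taken verbatim from the error log above
debug_info = ("/GstPipeline:pipeline0/GstURIDecodeBin:source-bin-00/"
              "GstRtmp2Src:source: streaming stopped, reason error (-5)")

# Same pattern the demo's _bus_call handler uses to identify
# which source bin raised the error
matches = re.findall(r"source-bin-\d+", debug_info)
print("err result=", matches)  # err result= ['source-bin-00']
```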


#coding=utf-8

################################################################################
# The MIT License
#
# Copyright (c) 2019-2023, Prominence AI, Inc.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################




import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GLib, GstRtspServer
import pyds

import sys, datetime, re
#sys.path.insert(0, "../../")
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

os.environ["GST_DEBUG_DUMP_DOT_DIR"] = "/tmp"



class DSLPipline():

    def __init__(self, gpu_id,
                 drop_frame_interval, frame_skip,  video_url,
                 inferConfigFile
                 ):
        self.gpu_id = gpu_id
        self.drop_frame_interval = drop_frame_interval
        self.frame_skip = frame_skip
        self.video_url = video_url
        self.inferConfigFile = inferConfigFile

        self.pipeline = None
        self.streammux = None


    def _tiler_sink_pad_buffer_probe(self, pad, info, u_data):
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            print("Unable to get GstBuffer ")
            return Gst.PadProbeReturn.OK


        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            try:
                frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            except StopIteration:
                break

            source_id = frame_meta.source_id
            l_obj = frame_meta.obj_meta_list

            while l_obj is not None:
                try:
                    # Casting l_obj.data to pyds.NvDsObjectMeta
                    obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
                except StopIteration:
                    break

                try:
                    l_obj = l_obj.next
                except StopIteration:
                    break



            display_meta = pyds.nvds_acquire_display_meta_from_pool(batch_meta)
            display_meta.num_labels = 1
            py_nvosd_text_params = display_meta.text_params[0]
            py_nvosd_text_params.display_text = "time --:{}".format(datetime.datetime.now())

            # Now set the offsets where the string should appear
            py_nvosd_text_params.x_offset = 10
            py_nvosd_text_params.y_offset = 12

            # Font , font-color and font-size
            py_nvosd_text_params.font_params.font_name = "Serif"
            py_nvosd_text_params.font_params.font_size = 10
            # set(red, green, blue, alpha); set to White
            py_nvosd_text_params.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)

            # Text background color
            py_nvosd_text_params.set_bg_clr = 1
            # set(red, green, blue, alpha); set to Black
            py_nvosd_text_params.text_bg_clr.set(0.0, 0.0, 0.0, 1.0)
            # Using pyds.get_string() to get display_text as string
            print(pyds.get_string(py_nvosd_text_params.display_text))
            pyds.nvds_add_display_meta_to_frame(frame_meta, display_meta)

            try:
                l_frame = l_frame.next
            except StopIteration:
                break

        return Gst.PadProbeReturn.OK

    def _bus_call(self, bus, message, loop):
        t = message.type
        if t == Gst.MessageType.EOS:
            sys.stdout.write("End-of-stream\n")
            #loop.quit()
        elif t == Gst.MessageType.WARNING:
            err, debug = message.parse_warning()
            sys.stderr.write("Warning: %s: %s\n" % (err, debug))
            warning = "Warning: %s: %s\n" % (err, debug)
            result = re.findall(r"source-bin-\d+", warning)
            print("warning result=", result)


        elif t == Gst.MessageType.ERROR:
            err, debug = message.parse_error()
            sys.stderr.write("Error: %s: %s\n" % (err, debug))
            err_str = "Error: %s: %s\n" % (err, debug)
            result = re.findall(r"source-bin-\d+", err_str)
            print("err result=", result)

        elif t == Gst.MessageType.ELEMENT:
            struct = message.get_structure()
            # Check for stream-eos message
            if struct is not None and struct.has_name("stream-eos"):
                parsed, stream_id = struct.get_uint("stream-id")
                if parsed:
                    # Set eos status of stream to True, to be deleted in delete-sources
                    print("Got EOS from stream %d" % stream_id)

        return True


    def _create_uridecode_bin(self, source_id, url):
        def decodebin_child_added(child_proxy, Object, name, user_data):
            print("Decodebin child added:", name, "\n")
            if (name.find("decodebin") != -1):
                Object.connect("child-added", decodebin_child_added, user_data)
            if (name.find("nvv4l2decoder") != -1):
                Object.set_property("gpu_id", 0)
                Object.set_property("drop-frame-interval", self.drop_frame_interval)

        def cb_newpad(decodebin, pad, data):
            print("In cb_newpad\n")
            caps = pad.get_current_caps()
            gststruct = caps.get_structure(0)
            gstname = gststruct.get_name()

            # Need to check if the pad created by the decodebin is for video and not
            # audio.
            print("gstname=", gstname)
            if (gstname.find("video") != -1):
                source_id = data
                pad_name = "sink_%u" % source_id
                print(pad_name)
                # Get a sink pad from the streammux, link to decodebin
                sinkpad = self.streammux.request_pad_simple(pad_name)
                if not sinkpad:
                    print("Unable to create sink pad bin \n")
                if pad.link(sinkpad) == Gst.PadLinkReturn.OK:
                    print("Decodebin linked to pipeline")
                else:
                    print("Failed to link decodebin to pipeline\n")

        print("Creating uridecodebin for [%s]" % url)
        # Create a source GstBin to abstract this bin's content from the rest of the
        # pipeline
        bin_name = "source-bin-%02d" % source_id
        print(bin_name)
        # Source element for reading from the uri.
        # We will use decodebin and let it figure out the container format of the
        # stream and the codec and plug the appropriate demux and decode plugins.
        bin = Gst.ElementFactory.make("uridecodebin", bin_name)
        if not bin:
            print(" Unable to create uri decode bin \n")
        # We set the input uri to the source element
        bin.set_property("uri", url)
        # Connect to the "pad-added" signal of the decodebin which generates a
        # callback once a new pad for raw data has been created by the decodebin
        bin.connect("pad-added", cb_newpad, source_id)
        bin.connect("child-added", decodebin_child_added, source_id)
        return bin


    def run(self):
        # Standard GStreamer initialization
        Gst.init(None)

        # Create gstreamer elements
        # Create Pipeline element that will form a connection of other elements
        print("Creating Pipeline \n ")
        self.pipeline = Gst.Pipeline()
        is_live = False
        if not self.pipeline:
            print(" Unable to create Pipeline \n")
            return

        print("Creating streammux \n ")
        # Create nvstreammux instance to form batches from one or more sources.

        self.streammux = Gst.ElementFactory.make("nvstreammux", "Stream-muxer")
        if not self.streammux:
            print(" Unable to create NvStreamMux \n")
            return
        self.streammux.set_property("batched-push-timeout", 25000)
        self.streammux.set_property("batch-size", 30)
        self.streammux.set_property("gpu_id", 0)
        self.streammux.set_property('width', 1280)
        self.streammux.set_property('height', 720)

        if self.video_url.find("rtsp://") == 0:
            is_live = True
            print("At least one of the sources is live")
            self.streammux.set_property('live-source', 1)

        # Create first source bin and add to pipeline
        source_bin = self._create_uridecode_bin(0, self.video_url)
        if not source_bin:
            print("Failed to create source bin. Exiting. \n")
            return
        self.pipeline.add(source_bin)

        print("Creating Pgie \n ")
        pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
        if not pgie:
            print(" Unable to create pgie \n")

        # Set pgie, sgie1, and sgie2 configuration file paths
        pgie.set_property('config-file-path', self.inferConfigFile)
        # Set necessary properties of the nvinfer element, the necessary ones are:
        pgie.set_property("batch-size", 12)
        # Set gpu IDs of the inference engines
        pgie.set_property("gpu_id", 0)


        codec = "H264"
        enc_type = 0
        bitrate = 4000000

        # Use convertor to convert from NV12 to RGBA as required by nvosd
        nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
        if not nvvidconv:
            sys.stderr.write(" Unable to create nvvidconv \n")

        # Create OSD to draw on the converted RGBA buffer
        nvosd = Gst.ElementFactory.make("nvdsosd", "onscreendisplay")
        if not nvosd:
            sys.stderr.write(" Unable to create nvosd \n")
        nvvidconv_postosd = Gst.ElementFactory.make("nvvideoconvert", "convertor_postosd")
        if not nvvidconv_postosd:
            sys.stderr.write(" Unable to create nvvidconv_postosd \n")

        # Create a caps filter
        caps = Gst.ElementFactory.make("capsfilter", "filter")
        if enc_type == 0:
            caps.set_property("caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=I420"))
        else:
            caps.set_property("caps", Gst.Caps.from_string("video/x-raw, format=I420"))

        # Make the encoder
        if codec == "H264":
            if enc_type == 0:
                encoder = Gst.ElementFactory.make("nvv4l2h264enc", "encoder")
            else:
                encoder = Gst.ElementFactory.make("x264enc", "encoder")
            print("Creating H264 Encoder")
        elif codec == "H265":
            if enc_type == 0:
                encoder = Gst.ElementFactory.make("nvv4l2h265enc", "encoder")
            else:
                encoder = Gst.ElementFactory.make("x265enc", "encoder")
            print("Creating H265 Encoder")
        if not encoder:
            sys.stderr.write(" Unable to create encoder")
        encoder.set_property('bitrate', bitrate)


        # Make the payload-encode video into RTP packets
        if codec == "H264":
            rtppay = Gst.ElementFactory.make("rtph264pay", "rtppay")
            print("Creating H264 rtppay")
        elif codec == "H265":
            rtppay = Gst.ElementFactory.make("rtph265pay", "rtppay")
            print("Creating H265 rtppay")
        if not rtppay:
            sys.stderr.write(" Unable to create rtppay")

        # Make the UDP sink
        updsink_port_num = 5400
        sink = Gst.ElementFactory.make("udpsink", "udpsink")
        if not sink:
            sys.stderr.write(" Unable to create udpsink")

        sink.set_property('host', '224.224.255.255')
        sink.set_property('port', updsink_port_num)
        sink.set_property('async', False)
        sink.set_property('sync', 1)

        self.pipeline.add(self.streammux)
        self.pipeline.add(pgie)
        self.pipeline.add(nvvidconv)
        self.pipeline.add(nvosd)
        self.pipeline.add(nvvidconv_postosd)
        self.pipeline.add(caps)
        self.pipeline.add(encoder)
        self.pipeline.add(rtppay)
        self.pipeline.add(sink)

        # We link elements in the following order:
        # sourcebin -> streammux -> nvinfer -> nvvideoconvert -> nvdsosd ->
        # nvvideoconvert -> capsfilter -> encoder -> rtppay -> udpsink

        print("Linking elements in the Pipeline \n")
        self.streammux.link(pgie)
        pgie.link(nvvidconv)
        nvvidconv.link(nvosd)
        nvosd.link(nvvidconv_postosd)
        nvvidconv_postosd.link(caps)
        caps.link(encoder)
        encoder.link(rtppay)
        rtppay.link(sink)

        # create an event loop and feed gstreamer bus messages to it
        loop = GLib.MainLoop()
        bus = self.pipeline.get_bus()
        bus.add_signal_watch()
        bus.connect("message", self._bus_call, loop)

        self.pipeline.set_state(Gst.State.PAUSED)


        # Start streaming
        rtsp_port_num = 8554

        server = GstRtspServer.RTSPServer.new()
        server.props.service = "%d" % rtsp_port_num
        server.attach(None)

        factory = GstRtspServer.RTSPMediaFactory.new()
        factory.set_launch(
            "( udpsrc name=pay0 port=%d buffer-size=524288 caps=\"application/x-rtp, media=video, clock-rate=90000, encoding-name=(string)%s, payload=96 \" )" % (
            updsink_port_num, "H264"))
        factory.set_shared(True)
        server.get_mount_points().add_factory("/ds-test", factory)


        # Add a probe to the sink pad of the osd element; by the time a buffer
        # reaches it, the buffer will have got all the metadata.
        osdsinkpad = nvosd.get_static_pad("sink")
        if not osdsinkpad:
            sys.stderr.write(" Unable to get sink pad of nvosd \n")
        #osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, self._save_frame, 0)
        osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, self._tiler_sink_pad_buffer_probe, 0)
        Gst.debug_bin_to_dot_file(self.pipeline, Gst.DebugGraphDetails.ALL, "pipeline")

        print("Starting pipeline \n")
        # start playback and listen to events
        self.pipeline.set_state(Gst.State.PLAYING)
        print("Set_state PLAYING \n")
        try:
            loop.run()
        except:
            import traceback
            print(traceback.format_exc())
        # cleanup
        print("Exiting app\n")
        self.pipeline.set_state(Gst.State.NULL)



if __name__ == '__main__':
    p = DSLPipline(
        gpu_id=0,
        drop_frame_interval=4,
        frame_skip=8,
        video_url="rtmp://127.0.0.1:10935/test/3",
        inferConfigFile='/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out/dstest1_pgie_config.txt',

    )
    p.run()


The flowchart of the example is as follows: [pipeline diagram image]

Can you give me some tips? Thank you very much

Please add h264parse before rtph264pay.
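
For orientation, here is a gst-launch-style sketch of the resulting encode branch (the property values are illustrative, copied from the script above, and not a definitive configuration). The point is the element ordering: h264parse sits between the encoder and the payloader. In the Python script you would create it with Gst.ElementFactory.make("h264parse", "parser"), add it to the pipeline, and link encoder -> parser -> rtppay.

```python
# Sketch of the encode branch as a gst-launch-style string; the key change
# is that h264parse is placed between nvv4l2h264enc and rtph264pay.
elements = [
    "nvvideoconvert",
    "capsfilter caps=video/x-raw(memory:NVMM),format=I420",
    "nvv4l2h264enc bitrate=4000000",
    "h264parse",  # <- newly inserted parser
    "rtph264pay",
    "udpsink host=224.224.255.255 port=5400",
]
launch = " ! ".join(elements)
print(launch)
```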

Like this? The following errors still occur:

0:00:10.501787216 62652 0x55bba341a180 ERROR         v4l2bufferpool gstv4l2bufferpool.c:320:gst_v4l2_buffer_pool_copy_buffer:buffer: 0x7f6908109120, pts 0:00:00.000000000, dts 99:99:99.999999999, dur 99:99:99.999999999, size 64, offset none, offset_end none, flags 0x0 ERROR in BufSurfacecopy 

0:00:10.501798516 62652 0x55bba341a180 ERROR         v4l2bufferpool gstv4l2bufferpool.c:2420:gst_v4l2_buffer_pool_process:<encoder:pool:sink> failed to prepare data
0:00:10.501806095 62652 0x55bba341a180 WARN            v4l2videoenc gstv4l2videoenc.c:1756:gst_v4l2_video_enc_handle_frame:<encoder> error: Failed to process frame.
0:00:10.501810108 62652 0x55bba341a180 WARN            v4l2videoenc gstv4l2videoenc.c:1756:gst_v4l2_video_enc_handle_frame:<encoder> error: Maybe be due to not enough memory or failing driver
0:00:10.501909733 62652 0x55bba341a180 WARN                 nvinfer gstnvinfer.cpp:2406:gst_nvinfer_output_loop:<primary-inference> error: Internal data stream error.
0:00:10.501916692 62652 0x55bba341a180 WARN                 nvinfer gstnvinfer.cpp:2406:gst_nvinfer_output_loop:<primary-inference> error: streaming stopped, reason error (-5)
Error: gst-resource-error-quark: Failed to process frame. (1): gstv4l2videoenc.c(1756): gst_v4l2_video_enc_handle_frame (): /GstPipeline:pipeline0/nvv4l2h264enc:encoder:
Maybe be due to not enough memory or failing driver
err result= []
Error: gst-stream-error-quark: Internal data stream error. (1): gstnvinfer.cpp(2406): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason error (-5)
err result= []
time --:2025-01-03 11:27:33.414222
time --:2025-01-03 11:27:33.414406
time --:2025-01-03 11:27:33.414546
time --:2025-01-03 11:27:33.414706

(python3:62652): GStreamer-WARNING **: 11:27:33.416: (../gst/gstinfo.c:556):gst_debug_log_valist: runtime check failed: (object == NULL || G_IS_OBJECT (object))

(python3:62652): GStreamer-WARNING **: 11:27:33.416: (../gst/gstinfo.c:1340):gst_debug_log_default: runtime check failed: (object == NULL || G_IS_OBJECT (object))
0:00:10.531852159 62652 0x55bba341a180 ERROR         v4l2bufferpool gstv4l2bufferpool.c:320:gst_v4l2_buffer_pool_copy_buffer:buffer: 0x7f6908109240, pts 0:00:00.040000000, dts 99:99:99.999999999, dur 99:99:99.999999999, size 64, offset none, offset_end none, flags 0x0 ERROR in BufSurfacecopy 

0:00:10.531866598 62652 0x55bba341a180 ERROR         v4l2bufferpool gstv4l2bufferpool.c:2420:gst_v4l2_buffer_pool_process:<encoder:pool:sink> failed to prepare data

I can run your script on my machine after adding the RTMP live-stream check:

        if self.video_url.find("rtmp://") == 0:
            is_live = True
            print("At least one of the sources is live")
            self.streammux.set_property('live-source', 1)
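
The scheme check can also be factored into a small helper (hypothetical, not part of the DeepStream samples) so that any live-stream scheme enables live-source mode in one place:

```python
# Hypothetical helper: treat both RTSP and RTMP inputs as live sources.
LIVE_SCHEMES = ("rtsp://", "rtmp://")

def is_live_source(url: str) -> bool:
    """Return True when the URL uses a scheme that delivers a live stream."""
    return url.startswith(LIVE_SCHEMES)

# In run(), this would replace the separate per-scheme checks:
#     if is_live_source(self.video_url):
#         self.streammux.set_property('live-source', 1)
print(is_live_source("rtmp://127.0.0.1:10935/test/3"))  # → True
```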

Are you working in docker container or directly in host machine?

Thank you so much.

However, after running for a while, the pipeline is interrupted as follows (this happens with a 50 fps stream; a 25 fps stream did not have this issue):

time --:2025-01-03 17:41:12.887716
time --:2025-01-03 17:41:12.913018
time --:2025-01-03 17:41:12.938359
time --:2025-01-03 17:41:12.963825
time --:2025-01-03 17:41:12.989261
time --:2025-01-03 17:41:13.014670
time --:2025-01-03 17:41:13.040169
time --:2025-01-03 17:41:13.065551
nvstreammux: Successfully handled EOS for source_id=0
time --:2025-01-03 17:41:13.069429
time --:2025-01-03 17:41:13.071675
time --:2025-01-03 17:41:13.074119
0:02:48.514158490 16127 0x55fc346ffaa0 WARN           v4l2allocator gstv4l2allocator.c:1511:gst_v4l2_allocator_dqbuf:<encoder:pool:src:allocator> V4L2 provided buffer has bytesused 0 which is too small to include data_offset 0
Got EOS from stream 0
End-of-stream
0:02:59.530770487 16127 0x7fbb165c2300 WARN               rtspmedia rtsp-media.c:4935:gst_rtsp_media_set_state: media 0x7fbb343709b0 was not prepared

Is there any example of dynamically adding or deleting sources? I referred to this:
deepstream_python_apps/apps/runtime_source_add_delete/deepstream_rt_src_add_del.py at cb7fd9c8aa012178527e0cb84f91d1f5a0ad37ff · NVIDIA-AI-IOT/deepstream_python_apps
However, after successfully adding a source, the output RTSP stream did not change.

With multiple streams, can I control which stream is displayed? Is there any related documentation?

Please raise new topic for new issue.