Deepstream x264enc + rtmpsink

  1. When I use x264enc to encode the video stream and push it over RTMP, the pipeline gets stuck in the READY state without reporting an error. Simply replacing x264enc with nvv4l2h264enc makes the pipeline work fine. However, in our project we found that the RTMP stream pushed by nvv4l2h264enc cannot be played in SRS, so I want to encode with x264enc. (I have verified with a simple pipeline that streams pushed using x264enc can be played in SRS: gst-launch-1.0 -v v4l2src device=/dev/video0 ! videoconvert ! video/x-raw,format=I420 ! x264enc ! h264parse ! flvmux ! rtmpsink location="rtmp://192.168.1.120/test/hongyan9").

  2. Here is my pipeline graph using the x264enc element, which gets stuck in the READY state (a standalone test of this branch is sketched after this list):
    multi_src_bin->queue->nvvideoconvert->nvinfer->nvstreamDemux->queue->nvvideoconvert->queue->nvdsosd->queue->tee->queue->nvvideoconvert->video/x-raw, format=I420->x264enc->h264parse->flvmux->rtmpsink.

  3. Here is my pipeline graph using the nvv4l2h264enc element, which works fine:
    multi_src_bin->queue->nvvideoconvert->nvinfer->nvstreamDemux->queue->nvvideoconvert->queue->nvdsosd->queue->tee->queue->nvvideoconvert->video/x-raw, format=I420->nvv4l2h264enc->h264parse->flvmux->rtmpsink.

  4. The program is modified from deepstream-app; here is my config file:
    config_hongyan_pipeline.txt (4.9 KB)
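
Before debugging inside DeepStream, a standalone test of the failing branch can confirm whether nvvideoconvert → x264enc → rtmpsink negotiates at all. This is a minimal sketch, assuming videotestsrc as a stand-in for the real source and the same SRS server as above; x264enc only accepts system-memory video/x-raw, so the caps in front of it must not request memory:NVMM:

    gst-launch-1.0 -v videotestsrc is-live=true \
      ! nvvideoconvert \
      ! 'video/x-raw, format=I420' \
      ! x264enc tune=zerolatency \
      ! h264parse \
      ! flvmux streamable=true \
      ! rtmpsink location="rtmp://192.168.1.120/test/hongyan9"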

When I use x264enc:
** INFO: <gstBusScrutiny:186: DeepStreamApp with [id:0 src:rtmp://192.168.1.120/UAV04001401264/UAV04001401264_2023_08_10_08_55_4444 dst:rtmp://192.168.1.120/UAV04001401264/UAV04001401264_2023_08_10_08_55_4444AI] Pipeline ready

gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is ON
[NvMultiObjectTracker] Initialized
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
WARNING: [TRT]: CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in CUDA C++ Programming Guide
0:00:01.476606577 406774 0x7f7b7c00fc70 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
INFO: …/nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:01.519927492 406774 0x7f7b7c00fc70 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
0:00:01.522756566 406774 0x7f7b7c00fc70 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.2/workspace/Deepstream-hongyan-app/config/config_infer_primary.txt sucessfully

**PERF: FPS 0 (Avg)
PERF(0): 0.00 (0.00)
PERF(0): 0.00 (0.00)
PERF(0): 0.00 (0.00)
PERF(0): 0.00 (0.00)

GPU: RTX 4080
DeepStream: 6.2
CUDA Driver Version: 12.0
CUDA Runtime Version: 11.8
TensorRT Version: 8.5
cuDNN Version: 8.7
libNVWarp360 Version: 2.0.1d3


Could you attach the patch that you added to the open source code?

Of course. I added a new function named create_rtmpsink_bin():

deepstream_sink_bin.c (35.0 KB)
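
For reference, a minimal sketch of what such a bin can look like, assuming the patch follows the same bin + ghost-pad pattern as the other sink bins in deepstream_sink_bin.c (names, the function signature, and property values here are illustrative, not copied from the attached file):

    /* Illustrative RTMP sink bin sketch; not the attached patch itself. */
    #include <gst/gst.h>

    static GstElement *
    create_rtmpsink_bin_sketch (const gchar * rtmp_uri)
    {
      GstElement *bin, *queue, *conv, *filter, *enc, *parse, *mux, *sink;
      GstCaps *caps;
      GstPad *pad;

      bin = gst_bin_new ("rtmp_sink_bin");
      queue = gst_element_factory_make ("queue", "rtmp_queue");
      conv = gst_element_factory_make ("nvvideoconvert", "rtmp_conv");
      filter = gst_element_factory_make ("capsfilter", "rtmp_caps");
      enc = gst_element_factory_make ("x264enc", "rtmp_enc");
      parse = gst_element_factory_make ("h264parse", "rtmp_parse");
      mux = gst_element_factory_make ("flvmux", "rtmp_mux");
      sink = gst_element_factory_make ("rtmpsink", "rtmp_sink");
      if (!bin || !queue || !conv || !filter || !enc || !parse || !mux || !sink)
        return NULL;

      /* x264enc only accepts system-memory video/x-raw, so these caps must
       * NOT carry the memory:NVMM feature; nvvideoconvert then copies the
       * buffers out of NVMM memory. */
      caps = gst_caps_from_string ("video/x-raw, format=I420");
      g_object_set (filter, "caps", caps, NULL);
      gst_caps_unref (caps);

      /* Low-latency settings commonly used for live RTMP pushing. */
      g_object_set (enc, "tune", 4 /* zerolatency */ , NULL);
      g_object_set (mux, "streamable", TRUE, NULL);
      g_object_set (sink, "location", rtmp_uri, "sync", FALSE, NULL);

      gst_bin_add_many (GST_BIN (bin), queue, conv, filter, enc, parse, mux,
          sink, NULL);
      if (!gst_element_link_many (queue, conv, filter, enc, parse, mux, sink,
              NULL))
        return NULL;

      /* Expose the queue's sink pad as the bin input. */
      pad = gst_element_get_static_pad (queue, "sink");
      gst_element_add_pad (bin, gst_ghost_pad_new ("sink", pad));
      gst_object_unref (pad);
      return bin;
    }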

Hi yuweiw!
Sorry to bother you. Do you have any views or opinions on this issue? If so, please let me know.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

The patch looks fine. For the playback issue, you can try the latest DeepStream version 6.3 and set the IDR-related encoder parameters: force-IDR and idrinterval (see the sketch below).
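
A hedged command-line sketch with those parameters set on nvv4l2h264enc; verify the exact property names and types on your build with gst-inspect-1.0 nvv4l2h264enc, and treat the values here as illustrative:

    gst-launch-1.0 -v videotestsrc is-live=true \
      ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=NV12' \
      ! nvv4l2h264enc force-idr=true idrinterval=30 \
      ! h264parse ! flvmux streamable=true \
      ! rtmpsink location="rtmp://192.168.1.120/test/hongyan9"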

As for the x264enc issue, we have similar usage in our demo: create_encode_file_bin in sources/apps/apps-common/src/deepstream_sink_bin.c. Could you try to reproduce the problem with our demo? A config sketch for that path follows below.
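
A hedged sink-group sketch to exercise create_encode_file_bin with the software encoder; the enc-type key (0 = hardware, 1 = software) follows the deepstream-app sink group, and the remaining values are illustrative:

    [sink1]
    enable=1
    # type 3 = encode to file (handled by create_encode_file_bin)
    type=3
    # container: 1 = mp4
    container=1
    # codec: 1 = h264
    codec=1
    # enc-type: 0 = hardware (nvv4l2h264enc), 1 = software (x264enc)
    enc-type=1
    bitrate=4000000
    output-file=./out.mp4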
