I am facing an issue with splitmuxsink in DeepStream

I am using GStreamer to stream video from a camera using its RTSP stream. In my pipeline, every other element is working fine, but when I try to save the recording as mp4, the pipeline throws the error "Buffer has no PTS" and shuts down. I want to save mp4 files with a duration of 5 minutes each.
I have attached the graph which shows the different elements used and their links.

Hey @junshengy, is there something I am missing in the pipeline? Can you please check and let me know?

This problem may be caused by the pipeline's input RTSP stream not containing RTCP SSRC packets.

This may cause the PTS of the GstBuffers output by nvstreamdemux to be set to 0.

Is the input of the pipeline an RTSP URL, and is the attach-sys-ts property of nvstreammux set to false?

If so, use the default value (true) for attach-sys-ts.
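
For reference, attach-sys-ts can also be set explicitly on nvstreammux. A minimal sketch of that (the RTSP URL is a placeholder, and the sink is reduced to fakesink just to show where the property goes):

```shell
# Pipeline fragment, not runnable as-is: "rtsp://<camera>" is a placeholder.
# attach-sys-ts=1 (the default) makes nvstreammux attach the system timestamp
# as the buffer PTS, so downstream elements such as splitmuxsink always see a
# valid PTS even when the RTSP source does not deliver usable timestamps.
gst-launch-1.0 uridecodebin uri="rtsp://<camera>" ! m.sink_0 \
    nvstreammux name=m batch-size=1 width=1920 height=1080 attach-sys-ts=1 ! \
    fakesink
```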

Yes, the input is an RTSP URL, but I have not made any changes to the attach-sys-ts property.

Modify your pipeline as below:

                                                        | --> .......
                                                        |
..... queue --> nvv4l2h264enc --> h264parse --> tee --> |
                                                        |
                                                        | -->  splitmuxsink

nvv4l2h264enc only needs one instance.

Try the following pipeline

gst-launch-1.0 uridecodebin uri="your rtsp camera" ! m.sink_0 \
               nvstreammux name=m batch-size=1 width=1920 height=1080 ! \
               nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt ! \
               nvtracker tracker-width=640 tracker-height=480 ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so \
               ll-config-file=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_tracker_NvDCF_perf.yml ! \
               nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! \
               nvv4l2h264enc ! h264parse ! tee name=t \
               t.src_0 ! splitmuxsink location="out%02d.mp4" muxer=mp4mux max-size-time=300000000000 \
               t.src_1 ! fakesink
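
Note that splitmuxsink's max-size-time is expressed in nanoseconds, so the 300000000000 above corresponds to the requested 5-minute segments. The arithmetic can be checked with plain shell:

```shell
# max-size-time is in nanoseconds:
# 5 minutes = 5 * 60 s * 1e9 ns/s = 300000000000 ns, matching the value above.
MAX_SIZE_TIME=$((5 * 60 * 1000000000))
echo "$MAX_SIZE_TIME"
```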

Thanks for the suggestion to optimise the script; I will make that change.
I tried running the pipeline, but it stopped midway.

modprobe: FATAL: Module nvidia not found in directory /lib/modules/5.10.104-tegra
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine open error
0:00:04.035906196    58 0xaaaaf03cb040 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed
0:00:04.123226923    58 0xaaaaf03cb040 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed, try rebuild
0:00:04.123359275    58 0xaaaaf03cb040 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
• The pipeline being used

If you use file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4, does the pipeline work properly?
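
In other words, only the source URI needs to change to isolate the camera; a sketch of the swap (the rest of the pipeline stays as posted above):

```shell
# Same pipeline shape as before, with the bundled sample clip as the source.
# If this runs but the RTSP URL does not, the problem is on the camera side.
gst-launch-1.0 uridecodebin \
    uri="file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4" ! \
    m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! fakesink
```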

Try re-flashing with JetPack 6.1 and DeepStream 7.1. On DS 7.1, the above pipeline works fine.

I tried running the pipeline using an mp4, and everything ran fine. The splitmuxsink was able to save an mp4 without any issues.

NVIDIA Jetson Xavier NX - AVerMedia NX215 - Jetpack 5.1

I let the pipeline run for a while and got this error:

modprobe: FATAL: Module nvidia not found in directory /lib/modules/5.10.104-tegra
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine open error
0:00:04.307920256   639 0xaaaaba39e440 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1897> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed
0:00:04.384933337   639 0xaaaaba39e440 WARN                 nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.2/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed, try rebuild
0:00:04.385060474   639 0xaaaaba39e440 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1923> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
0:05:12.062920950   639 0xaaaaba39e440 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1955> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:05:12.269448495   639 0xaaaaba39e440 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt sucessfully
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://admin:drishti123@192.168.1.153x
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source: Could not open resource for reading and writing.
Additional debug info:
gstrtspsrc.c(7893): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source:
Failed to connect. (Generic error)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
[NvMultiObjectTracker] De-initialized
Freeing pipeline ...
root@tegra-ubuntu:/home/nvidia/triton-deepstream# ./test.sh 
modprobe: FATAL: Module nvidia not found in directory /lib/modules/5.10.104-tegra
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
gstnvtracker: Batch processing is ON
gstnvtracker: Past frame output is OFF
[NvMultiObjectTracker] Initialized
0:00:04.443217156   660 0xaaaabd20e440 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1909> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT input_1         3x368x640       
1   OUTPUT kFLOAT conv2d_bbox     16x23x40        
2   OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40         

0:00:04.517458518   660 0xaaaabd20e440 INFO                 nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2012> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.2/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine
0:00:04.563308967   660 0xaaaabd20e440 INFO                 nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt sucessfully
Pipeline is live and does not need PREROLL ...
Progress: (open) Opening Stream
Progress: (connect) Connecting to rtsp://admin:drishti123@192.168.1.153x
ERROR: from element /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source: Could not open resource for reading and writing.
Additional debug info:
gstrtspsrc.c(7893): gst_rtspsrc_retrieve_sdp (): /GstPipeline:pipeline0/GstURIDecodeBin:uridecodebin0/GstRTSPSrc:source:
Failed to connect. (Generic error)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
[NvMultiObjectTracker] De-initialized
Freeing pipeline ...

From the above two points, this problem is related to your RTSP camera. I tested the internal RTSP test source and it works fine.

Please check your RTSP camera. Also, please dump the RTSP stream to a .ts file; I will try to reproduce it.

gst-launch-1.0 rtspsrc location="rtsp" protocols=4 ! queue ! parsebin ! queue ! h264parse ! mpegtsmux ! filesink location=rtsp.ts

I tested the script using a different RTSP source from a different camera, and everything worked fine.

On the other hand, I am facing delay when running the script. When we start the pipeline there isn't any delay, but it slowly builds up. I checked the pipeline: there isn't any delay between the elements and the processing we perform; the delay builds up right from the start.

Is there a configuration in nvstreammux that would help, or any queuing tips?
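
(For reference, a few standard latency-related knobs, sketched as a pipeline fragment. The property names are real GStreamer/DeepStream ones, but the values are illustrative, not tuned, and whether they help depends on the setup.)

```shell
# Pipeline fragment; "rtsp://<camera>" is a placeholder.
# rtspsrc latency: RTP jitter-buffer depth in ms; large values add fixed delay.
# nvstreammux live-source=1 with a small batched-push-timeout (microseconds)
#   pushes batches promptly instead of waiting for them to fill.
# queue leaky=downstream drops stale buffers instead of letting them pile up.
gst-launch-1.0 rtspsrc location="rtsp://<camera>" latency=200 ! \
    parsebin ! nvv4l2decoder ! m.sink_0 \
    nvstreammux name=m batch-size=1 width=1920 height=1080 \
        live-source=1 batched-push-timeout=40000 ! \
    queue leaky=downstream max-size-buffers=4 ! fakesink sync=false
```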

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.