Artifacts when batching streams with different frame rates

• Hardware Platform: dGPU (L4)
• DeepStream Version: 7.1 (nvcr.io/nvidia/deepstream:7.1-samples-multiarch)
• TensorRT Version: 10.3.0.26-1+cuda12.5
• NVIDIA GPU Driver Version: 570.133.20 (CUDA 12.8)
• Issue Type: bugs

Hi,

We are facing an issue where streams decoded by nvurisrcbin and batched with the new nvstreammux show severe decoding artifacts (typically blocky squares around moving objects).

The issue appears whenever the input RTSP streams have different FPS.

For instance, 25 fps & 25 fps works, 30 fps & 30 fps works, but 30 fps & 25 fps shows decoding artifacts.

We know it’s not a network issue or a decoder limit: as long as the FPS are identical, we can run many more streams in the same pipeline with no artifacts whatsoever.

This happens with both DeepStream 7.0 and 7.1.

A minimal pipeline to reproduce the issue simply decodes the streams and outputs the frames as PNG files:

  1. This produces PNG files with artifacts
    (note that not all frames are broken; make sure objects are moving for a few seconds)
COMMAND=(

    docker run
    --rm
    --interactive
    --runtime       nvidia
    --env           USE_NEW_NVSTREAMMUX=yes
    --volume        "${OUTPUT_DIR}:/output"
    --name          "${CONTAINER}"
    nvcr.io/nvidia/deepstream:7.1-samples-multiarch

    gst-launch-1.0

        # Dummy "pipeline"
        nvstreammux   name=muxer batch-size=2
    !   nvstreamdemux name=demuxer

        # First source at 25 FPS
        nvurisrcbin "uri=${RTSP_SERVER}/${VIDEO_25FPS}"
    !   muxer.sink_0
        demuxer.src_0
    !   queue
    !   fakesink

        # Second source at 30 FPS
        nvurisrcbin "uri=${RTSP_SERVER}/${VIDEO_30FPS}"
    !   muxer.sink_1
        demuxer.src_1
    !   queue
    !   nvvideoconvert
    !   "video/x-raw,width=640,height=360"
    !   pngenc
    !   multifilesink
            "location=/output/${VIDEO_30FPS}_%05u.png"

)

"${COMMAND[@]}"
  2. This produces PNG files with no artifacts (the only change is the first video, which now has the same FPS as the second)
COMMAND=(

    docker run
    --rm
    --interactive
    --runtime       nvidia
    --env           USE_NEW_NVSTREAMMUX=yes
    --volume        "${OUTPUT_DIR}:/output"
    --name          "${CONTAINER}"
    nvcr.io/nvidia/deepstream:7.1-samples-multiarch

    gst-launch-1.0

        # Dummy "pipeline"
        nvstreammux   name=muxer batch-size=2
    !   nvstreamdemux name=demuxer

        # First source at 30 FPS
        nvurisrcbin "uri=${RTSP_SERVER}/${OTHER_VIDEO_30FPS}"
    !   muxer.sink_0
        demuxer.src_0
    !   queue
    !   fakesink

        # Second source at 30 FPS
        nvurisrcbin "uri=${RTSP_SERVER}/${VIDEO_30FPS}"
    !   muxer.sink_1
        demuxer.src_1
    !   queue
    !   nvvideoconvert
    !   "video/x-raw,width=640,height=360"
    !   pngenc
    !   multifilesink
            "location=/output/${VIDEO_30FPS}_%05u.png"

)

"${COMMAND[@]}"

We can “solve” the issue by adding a videorate element and configuring nvstreammux’s overall-min-fps/overall-max-fps, but as soon as we add another stream outside this min/max range (for instance a stream at 20 fps), the artifacts show up again.
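For reference, here is a sketch of that workaround with every source normalized to 30 FPS. The mux config keys (overall-max-fps-n/-d, overall-min-fps-n/-d) and the config-file-path property are taken from the new nvstreammux documentation; the file path, caps string, and variable names are placeholders:

    # mux_config.txt (placeholder path), passed to the muxer below;
    # pins the batching rate to the normalized source FPS
    [property]
    batch-size=2
    overall-max-fps-n=30
    overall-max-fps-d=1
    overall-min-fps-n=30
    overall-min-fps-d=1

    # Pipeline fragment: rate-normalize each source before batching.
    # videorate only duplicates/drops buffers, so it passes NVMM
    # memory through untouched.
        nvstreammux   name=muxer batch-size=2 config-file-path=/tmp/mux_config.txt
    !   nvstreamdemux name=demuxer

        nvurisrcbin "uri=${RTSP_SERVER}/${VIDEO_25FPS}"
    !   nvvideoconvert
    !   videorate
    !   "video/x-raw(memory:NVMM),framerate=30/1"
    !   muxer.sink_0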

Other people have mentioned the same issue on the forum, and all the solutions I’ve seen so far rely on knowing the FPS of the sources in advance, which we cannot do in our case.

Is there a specific setup/configuration that would allow us to batch streams of any FPS? Or is it a requirement to enforce some FPS (or FPS range) on all sources?

Have you set the new nvstreammux parameters as described in DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums?
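For context, the properties that FAQ entry covers live on the muxer itself. A minimal sketch with illustrative values (sync-inputs and max-latency are documented new-nvstreammux properties; the 250 ms value is an arbitrary jitter allowance, not a recommendation):

    # Illustrative values only; max-latency is in nanoseconds
        nvstreammux name=muxer batch-size=2 sync-inputs=1 max-latency=250000000
    !   nvstreamdemux name=demuxer

sync-inputs makes the muxer synchronize input buffers against the pipeline clock, and max-latency bounds how long it waits for slow or late sources.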

Have you set the property “select-rtp-protocol” of nvurisrcbin to 4? See Gst-nvurisrcbin — DeepStream documentation.
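Applied to the repro above, that suggestion looks like this (4 selects TCP-only RTP transport per the nvurisrcbin documentation; the variable names are from the original pipeline):

    # Force TCP-only RTP transport for this source
        nvurisrcbin "uri=${RTSP_SERVER}/${VIDEO_25FPS}" select-rtp-protocol=4
    !   muxer.sink_0

Forcing TCP rules out RTP packet loss over UDP as the cause of the corrupted macroblocks.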

The parameters should be set to adapt to all the RTSP streams; that is, you need to know the characteristics of all the streams that will be added to the pipeline in advance.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
