DS 6.2 / 6.3 h264 encoder compression artifacts and HLS parsing problem

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
3090, 4090, 3080 Ti
• DeepStream Version
DS 6.1.1-triton, 6.2-triton, 6.3-gc-triton-devel
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
525.125.06
• Issue Type( questions, new requirements, bugs)
bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Described in the post below.
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi All,

This is a follow-up post about the compression artifacts, as well as the failure to parse the RTSP output into other formats (for example HLS) in DS 6.2, and now DS 6.3. Since a new version of DeepStream has been released, I think it is better to open a new post to raise the issues.

To recap, I am using DeepStream with rtsp-simple-server and ffmpeg to stream the video analytics output to a web interface.

  1. DeepStream handles the analytics part: it ingests RTSP and outputs an RTSP push stream.
  2. The push stream is handled by rtsp-simple-server.
  3. ffmpeg parses the RTSP stream into HLS format for display on the web.

Also, in the pipeline, since I need to do some heavy computation (relative to the real-time requirement), I need to reduce the framerate using videorate (a GStreamer element) or by limiting the fps from the source.
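For the second option (limiting the fps from the source), one way is a capsfilter after videorate, which pins the output to a fixed rate instead of just capping it. A minimal sketch in the same style as the test pipelines below (the 6 fps value and the stream path are just examples):

# pin the frame rate to exactly 6 fps with a capsfilter instead of videorate max-rate
gst-launch-1.0 videotestsrc ! video/x-raw,width=1920,height=1080 ! videorate ! video/x-raw,framerate=6/1 ! nvvideoconvert ! clockoverlay ! nvvideoconvert ! nvv4l2h264enc ! rtspclientsink location=rtsp://localhost:8554/611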

Example setup:

  1. Set up rtsp-simple-server
docker run --rm -it --network=host bluenviron/mediamtx:latest
  2. Set up DeepStream for the various versions. Below shows DS 6.1.1 as an example
docker run --gpus all \                                      
-itd --rm --net=host --privileged \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
--name=6.1.1 nvcr.io/nvidia/deepstream:6.1.1-triton
  3. Set up a testing stream using ffmpeg + the DeepStream sample video / GStreamer videotestsrc
# sample video: you can install ffmpeg inside the container and stream the output to rtsp-simple-server
# the required libraries for ffmpeg can be installed using user_additional_install.sh
# with ffmpeg we can loop the sample stream indefinitely for easier debugging
ffmpeg -re -fflags +genpts -stream_loop -1 -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -an -c:v copy -f rtsp rtsp://localhost:8554/raw

# or simply use videotestsrc from GStreamer as a fake input
# then there is no need to generate an RTSP stream
gst-launch-1.0 videotestsrc ! autovideosink
  4. Now that we have a testing stream ready, we can run a simple decode-and-encode pipeline to simulate the DeepStream analytics part (without actually running inference). To investigate the performance, we add videorate and clockoverlay to the pipeline to see the effect.
# using the generated RTSP stream; change the input and output IP addresses accordingly
gst-launch-1.0 uridecodebin uri=rtsp://192.168.51.83:8554/raw ! videorate max-rate=6 ! nvvideoconvert ! clockoverlay ! nvvideoconvert ! nvv4l2h264enc ! rtspclientsink location=rtsp://192.168.51.83:8554/611

# or

# using the GStreamer testing src
gst-launch-1.0 videotestsrc ! video/x-raw,width=1920,height=1080 ! videorate max-rate=6 ! nvvideoconvert ! clockoverlay ! nvvideoconvert ! nvv4l2h264enc ! rtspclientsink location=rtsp://localhost:8554/611
  5. Now that the generated h264 RTSP stream from DeepStream is available, we can view it using vlc
vlc rtsp://localhost:8554/611
  6. Next, we generate the HLS stream using ffmpeg
ffmpeg -rtsp_transport tcp -i rtsp://localhost:8554/611 -an -c:v copy -f hls -hls_time 2 -hls_list_size 3 -start_number 1 -hls_allow_cache 0 -hls_flags +delete_segments+omit_endlist+discont_start test.m3u8
  7. We should now see the generated playlist (.m3u8) and the segments (.ts). We can open the playlist with vlc as well (a minimal way to serve these files for browser playback is sketched right after this step)
vlc ./test.m3u8
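To actually show the HLS output on a web page (the goal of step 3 in the flow above), the generated .m3u8 and .ts files only need to be served over plain HTTP. A minimal sketch, assuming the files are written to the current directory and port 8080 is free:

# serve the playlist and segments over HTTP for browser playback
python3 -m http.server 8080
# then open http://<host>:8080/test.m3u8 in Safari or any HLS-capable player
# (most other browsers need a JavaScript HLS player such as hls.js)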

As an example (left: the generated RTSP stream, right: the HLS stream), we can see the delay is around several seconds.


That is quite a bit to digest, so let me summarize the flow here.

  1. Take in an RTSP stream (we spent some effort to create a testing one)
  2. Do something with DeepStream
  3. Encode it back to h264 and send it to an RTSP server
  4. Parse the output RTSP into HLS (with ffmpeg)

This works perfectly with DeepStream 6.1.1.

Now things go wrong when we use DeepStream 6.2 and 6.3.

We can redo the above procedure by simply changing the DeepStream container. For example:

docker run --gpus all \   
-itd --rm --net=host --privileged \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
--name=6.2  nvcr.io/nvidia/deepstream:6.2-triton

Let’s first observe the output of DeepStream with the VLC player.
Left: 6.1.1, middle: 6.2, right: 6.3


We can see a lot of compression artifacts for DS 6.2 and DS 6.3.

Furthermore, parsing RTSP to HLS causes trouble in DS 6.2 and 6.3:

  1. DS 6.2 requires setting tuning-info-id to 1 in order to work (see the sketch after this list).
  2. Both 6.2 and 6.3 need to wait a long time before the stream can actually be parsed into HLS.
  3. The wait time coincides with the appearance of the compression artifacts.
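For reference, a sketch of the DS 6.2 workaround from point 1, based on the videotestsrc test pipeline above with only the tuning-info-id property added (the stream path is arbitrary):

# DS 6.2: setting tuning-info-id=1 on the encoder was needed to get a parseable stream
gst-launch-1.0 videotestsrc ! video/x-raw,width=1920,height=1080 ! videorate max-rate=6 ! nvvideoconvert ! clockoverlay ! nvvideoconvert ! nvv4l2h264enc tuning-info-id=1 ! rtspclientsink location=rtsp://localhost:8554/62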

Combining these effects, DS 6.2 and 6.3 cause two problems.

  1. It takes a long time (around 2 min) for the HLS stream to be ready.
  2. The delay is around 2 min as well.

DS 6.1.1, left: RTSP, right: HLS, delay around several seconds

DS 6.2 with tuning-info-id=1, left: RTSP, right: HLS, delay around several minutes

DS 6.3, left: RTSP, right: HLS, delay around several minutes

The delay is critical since the source may not be stable; if it takes 2 min every time to restart (including the HLS reformatting), the application is not usable.

As mentioned before, I bought a 4090 for DeepStream applications, and it is currently not usable since DS 6.2 and 6.3 have the compression artifacts and the HLS parsing problem. It would be great if DS 6.2/6.3 could achieve the same performance as DS 6.1.1 did for the encoder part.

Many thanks!

OK. So DeepStream 6.3 doesn’t need the tuning-info-id parameter set, but still has the delay and artifact problems, is that right? Could you attach the videos with the timestamps for us? Thanks

Yes. Are the videos in the post good enough?

I have recorded an example using the ffmpeg command.

Left: 6.1.1, middle: 6.2 without setting tuning-info-id, right: 6.3

Are the results the same on all three of your cards (3090, 4090, 3080 Ti)?

Yes. And for your reference, the videos were recorded on the 3090.

OK. Our DeepStream 6.3 Guide has been released: dgpu-setup-for-ubuntu.
We have good support for the card models described above.
1. Professional/consumer GPUs like GeForce®/NVIDIA RTX/QUADRO are supported by driver 530.41.03.
2. You can try setting the IDR parameters force-idr and idrinterval to see if there are still similar issues.

I have checked the guide already.

  1. Yes, but older DeepStream versions do not support all the cards.

  2. Yes, I have tried it already. It does not help.

After several rounds of tuning the nvv4l2h264enc parameters, any value of idrinterval < 30 will do. When the value is 32, the artifacts appear for a brief second.

All subsequent conversion works perfectly.
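For reference, a sketch of this workaround on the videotestsrc test pipeline, with idrinterval kept below 30 (25 here is just an example value):

# keeping the IDR interval below 30 avoids the artifacts and the long HLS start-up time
gst-launch-1.0 videotestsrc ! video/x-raw,width=1920,height=1080 ! videorate max-rate=6 ! nvvideoconvert ! clockoverlay ! nvvideoconvert ! nvv4l2h264enc idrinterval=25 ! rtspclientsink location=rtsp://localhost:8554/63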


Hi,
I tried DeepStream 6.3 and configured nvv4l2h264enc with idrinterval=25, force-idr=True, but it does not work on some devices.

iOS Safari and Chrome on Ubuntu + Windows work.
But Android Chrome + VLC do not.

Can you show all the parameters of nvv4l2h264enc?

Basically all default parameters, except for changing idrinterval.
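For reference, the full property list (and defaults) of the encoder can be dumped inside the DeepStream container:

# list all properties of the hardware H.264 encoder, including idrinterval and its default
gst-inspect-1.0 nvv4l2h264enc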

Is idrinterval only valid for DS 6.3?
How much delay are you observing after setting idrinterval?
Does the HLS stream work with the Chrome browser?

I tested on DS 6.3.
The HLS stream works in the Chrome browser on Ubuntu + Windows, but not on Android (Google Pixel).
I feel there is not much delay, about 1-2 seconds for HLS to start working.

It should depend on your codec as well; HLS is just a protocol, which most browsers support. As for latency, it depends on your segment length and playlist settings; normally it is around 6-10 s. For low-latency applications I would just pick RTSP/WebRTC, or you may want to try Apple’s LL-HLS.
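As an illustration of the segment-length point, the ffmpeg command from the walkthrough can be tuned for lower latency by shortening the segments and playlist. A sketch (values are illustrative); note that with -c:v copy ffmpeg can only cut segments on keyframes, so the effective segment length is still bounded by the encoder's IDR interval:

# shorter segments and a smaller playlist reduce HLS start-up and end-to-end latency
ffmpeg -rtsp_transport tcp -i rtsp://localhost:8554/611 -an -c:v copy -f hls -hls_time 1 -hls_list_size 2 -hls_flags +delete_segments+omit_endlist+discont_start test.m3u8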

With DeepStream versions <= 6.1 everything works, but from 6.2 onwards, and now with 6.3, it does not.
I tried HLS + LL-HLS, but it is not working.

DS 6.3 with idrinterval set to 30 did the job, but the delay in the HLS stream is around 8-12 seconds. With x264enc the delay is around 5-7 seconds. Are there any settings which can be explored?
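For comparison, a sketch of the x264enc software fallback mentioned above, staying in system memory around the overlay (the tune and key-int-max values are just examples):

# software H.264 encoding; key-int-max=30 keeps keyframes frequent enough for HLS segmenting
gst-launch-1.0 videotestsrc ! video/x-raw,width=1920,height=1080 ! videorate max-rate=6 ! videoconvert ! clockoverlay ! videoconvert ! x264enc tune=zerolatency key-int-max=30 ! h264parse ! rtspclientsink location=rtsp://localhost:8554/x264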
