Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
3090, 4090, 3080 Ti
• DeepStream Version
DS 6.1.1-triton, 6.2-triton, 6.3-gc-triton-devel
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
525.125.06
• Issue Type( questions, new requirements, bugs)
bugs
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
See the reproduction steps in the post below.
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)
Hi All,
This is a follow-up post about the compression artifacts, as well as the failure to parse the RTSP output into other formats (for example, HLS) in DS 6.2, and now DS 6.3. Since a new version of DeepStream has been released, I think it is better to open a new post to raise these issues.
To recap, I am using deepstream with rtsp-simple-server and ffmpeg to stream the video analytics output to a web interface. deepstream handles the analytics part by ingesting an rtsp stream and outputting an rtsp push stream. The push stream is handled by rtsp-simple-server. ffmpeg then parses the rtsp stream into hls format for display on the web.
Also, since I need to do some heavy computation in the pipeline (relative to the real-time requirement), I need to reduce the frame rate, either with the videorate GStreamer element or by limiting the fps at the source.
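For reference, a minimal sketch of both options (the 6 fps target and the videotestsrc source are only illustrative):
# option 1: drop frames inside the pipeline with videorate
gst-launch-1.0 videotestsrc ! videorate max-rate=6 ! autovideosink
# option 2: request a fixed frame rate from the source with a caps filter
gst-launch-1.0 videotestsrc ! video/x-raw,framerate=6/1 ! autovideosink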
Example setup:
- Set up rtsp-simple-server:
docker run --rm -it --network=host bluenviron/mediamtx:latest
- Set up deepstream for the various versions. Below shows DS 6.1.1 as an example:
docker run --gpus all \
-itd --rm --net=host --privileged \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
--name=6.1.1 nvcr.io/nvidia/deepstream:6.1.1-triton
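Once inside the container, it is worth sanity-checking that the hardware encoder plugin is available before going further (a quick check on my side, not strictly part of the repro):
# should print the element details, including the exposed properties
gst-inspect-1.0 nvv4l2h264enc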
- Set up a testing stream using ffmpeg + the deepstream sample video, or GStreamer videotestsrc:
# for the sample video, you can install ffmpeg inside the container and stream the output to rtsp-simple-server
# the required libraries for ffmpeg can be installed using user_additional_install.sh
# ffmpeg can loop the sample stream indefinitely, which makes debugging easier
ffmpeg -re -fflags +genpts -stream_loop -1 -i /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 -an -c:v copy -f rtsp rtsp://localhost:8554/raw
# or simply use videotestsrc from GStreamer as a fake input
# then there is no need to generate an rtsp stream
gst-launch-1.0 videotestsrc ! autovideosink
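If you went the ffmpeg route instead, you can quickly confirm that the pushed stream is reachable before building the full pipeline (assuming the default address used above):
# decode and display the test stream pushed to rtsp-simple-server
gst-launch-1.0 uridecodebin uri=rtsp://localhost:8554/raw ! autovideosink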
- Now that we have a testing stream ready, we can run a simple decode-and-encode pipeline to simulate the deepstream analytics part (without actually running inference). To investigate the performance, we will add videorate and clockoverlay to the pipeline to see the effect:
# using the generated rtsp stream, change the input and output IPs accordingly
gst-launch-1.0 uridecodebin uri=rtsp://192.168.51.83:8554/raw ! videorate max-rate=6 ! nvvideoconvert ! clockoverlay ! nvvideoconvert ! nvv4l2h264enc ! rtspclientsink location=rtsp://192.168.51.83:8554/611
# or
# using the GStreamer testing src
gst-launch-1.0 videotestsrc ! video/x-raw,width=1920,height=1080 ! videorate max-rate=6 ! nvvideoconvert ! clockoverlay ! nvvideoconvert ! nvv4l2h264enc ! rtspclientsink location=rtsp://localhost:8554/611
- Now we can view the generated h264 rtsp stream from deepstream using vlc:
vlc rtsp://localhost:8554/611
- We now generate the hls stream using ffmpeg:
ffmpeg -rtsp_transport tcp -i rtsp://localhost:8554/611 -an -c:v copy -f hls -hls_time 2 -hls_list_size 3 -start_number 1 -hls_allow_cache 0 -hls_flags +delete_segments+omit_endlist+discont_start test.m3u8
- We should be able to see the generated playlist (.m3u8) and the segments (.ts). We can open the playlist using vlc as well:
vlc ./test.m3u8
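For the actual web interface, the playlist and segments only need to be exposed over HTTP; a minimal sketch, with an arbitrary port:
# serve test.m3u8 and the .ts segments from the current directory
python3 -m http.server 8080
# then point an HLS-capable player at http://<host>:8080/test.m3u8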
As an example (left: generated rtsp stream, right: hls stream), we can see the delay is around several seconds.
That is quite a bit to digest, so let me summarize the flow here:
- Take in an rtsp stream (we spent some effort to create a testing one)
- Do something with deepstream
- Encode it back to h264 and send it to an rtsp server
- Parse the output rtsp stream into hls (with ffmpeg)
This works perfectly with deepstream 6.1.1.
Things go wrong, however, when we use deepstream 6.2 and 6.3.
We can redo the above procedure by simply changing the deepstream container. For example:
docker run --gpus all \
-itd --rm --net=host --privileged \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
--name=6.2 nvcr.io/nvidia/deepstream:6.2-triton
Let’s first observe the output of deepstream with the vlc player.
Left: 6.1.1, middle: 6.2, right: 6.3
We can see a lot of compression artifacts with DS 6.2 and DS 6.3.
Furthermore, parsing the rtsp stream into hls causes trouble in DS 6.2 and 6.3:
- DS 6.2 requires setting tuning-info-id to 1 on the encoder in order to work (see the sketch after this list)
- Both 6.2 and 6.3 need to wait a long time before ffmpeg can actually parse the stream into HLS
- The waiting time is similar to the time at which the compression artifacts appear
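For concreteness, here is a sketch of the earlier pipeline with the DS 6.2 workaround applied; only the encoder property changes (the /62 output path is illustrative):
gst-launch-1.0 uridecodebin uri=rtsp://192.168.51.83:8554/raw ! videorate max-rate=6 ! nvvideoconvert ! clockoverlay ! nvvideoconvert ! nvv4l2h264enc tuning-info-id=1 ! rtspclientsink location=rtsp://192.168.51.83:8554/62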
Combining the effects, DS 6.2 and 6.3 cause two problems:
- It takes a long time (around 2 min) for the HLS stream to be ready.
- The playback delay is around 2 min as well.
DS 6.1.1, left: rtsp, right: HLS, delay around several seconds
DS 6.2 with tuning-info-id=1, left: rtsp, right: HLS, delay around several minutes
DS 6.3, left: rtsp, right: HLS, delay around several minutes
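My guess (not confirmed) is that the startup delay tracks the keyframe spacing, since ffmpeg can only cut an HLS segment at a keyframe. One way to check is to inspect the frame types of the encoded stream with ffprobe (stream address illustrative):
# prints the type (I/P/B) and timestamp of every decoded frame;
# very sparse I-frames would explain the long HLS startup time
ffprobe -rtsp_transport tcp -select_streams v:0 -show_entries frame=pict_type,pts_time -of csv=p=0 rtsp://localhost:8554/611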
The delay is critical since the source may not be stable; if it takes 2 minutes every time to restart (including the HLS reformatting), the application is not usable.
As mentioned before, I bought a 4090 for deepstream applications, but it is currently not usable since DS 6.2 and 6.3 have the compression artifacts and the HLS parsing problem. It would be great if DS 6.2/6.3 could achieve the same encoder performance as DS 6.1.1.
Many thanks!