GStreamer pipeline inside docker container produces low-quality video

I am running an accelerated GStreamer pipeline inside a docker container on a TX2. Even though everything seems to be working (no errors, video is streaming), the video quality is significantly poorer than when I run the same pipeline outside of the docker environment.

My real pipeline takes an RTSP stream, transcodes it, and distributes it via UDP.

I can reproduce the problem with this simple example pipeline:

gst-launch-1.0 -v -e videotestsrc \
    ! videorate \
    ! video/x-raw, format=\(string\)NV12, framerate=\(fraction\)25/1 \
    ! omxh264enc \
    ! h264parse \
    ! mp4mux \
    ! filesink location=`date +%Y-%m-%d-%H-%M-%S`.mp4 &\
sleep 10; \
pkill -SIGINT gst-launch-1.0

When running the above on the host, a video file of ~34MB is produced. Running it inside a docker container results in a file of ~1.9MB. The video from inside the container is visibly much more compressed.

When running the gstreamer pipeline in verbose mode, the caps printed are identical inside and outside the container.
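One diagnostic step (my suggestion, not something raised in the thread): since the negotiated caps match, compare the encoder's default property values, such as the default bitrate, on the host and in the container:

```shell
# Print omxh264enc's rate-control related properties with their current
# defaults; run this on the host and in the container and diff the output.
# A differing default (e.g. bitrate) could explain the size gap even when
# the negotiated caps are identical.
gst-inspect-1.0 omxh264enc | grep -iA2 'bitrate\|control-rate'
```

This requires the L4T GStreamer plugins, so it only runs on the Jetson itself.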


A TX2 running Ubuntu 18.04 with the NVIDIA container runtime installed:

sudo dpkg --get-selections | grep nvidia
libnvidia-container-tools			install
libnvidia-container0:arm64			install
nvidia-container-runtime			install
nvidia-container-toolkit			install
nvidia-l4t-3d-core					install
nvidia-l4t-apt-source				install
nvidia-l4t-bootloader				install
nvidia-l4t-camera					install
nvidia-l4t-configs					install
nvidia-l4t-core						install
nvidia-l4t-cuda						install
nvidia-l4t-firmware					install
nvidia-l4t-graphics-demos			install
nvidia-l4t-gstreamer				install
nvidia-l4t-init						install
nvidia-l4t-initrd					install
nvidia-l4t-jetson-io				install
nvidia-l4t-kernel					install
nvidia-l4t-kernel-dtbs				install
nvidia-l4t-kernel-headers			install
nvidia-l4t-libvulkan				install
nvidia-l4t-multimedia				install
nvidia-l4t-multimedia-utils			install
nvidia-l4t-oem-config				install
nvidia-l4t-tools					install
nvidia-l4t-wayland					install
nvidia-l4t-weston					install
nvidia-l4t-x11						install
nvidia-l4t-xusb-firmware			install


Based on (base image link missing), with gstreamer installed via apt.
Running with the nvidia runtime,
with the environment variables "NVIDIA_VISIBLE_DEVICES=all" and "NVIDIA_DRIVER_CAPABILITIES=all".

How can I get the gstreamer pipeline in the container to perform the same as on the host?

Please set is-live=1 on the videotestsrc plugin and use nvv4l2h264enc instead of omxh264enc, like:

gst-launch-1.0 -v -e videotestsrc is-live=1 \
    ! video/x-raw, format=\(string\)NV12, framerate=\(fraction\)25/1 \
    ! nvvidconv \
    ! nvv4l2h264enc \
    ! h264parse \
    ! matroskamux \
    ! filesink location=`date +%Y-%m-%d-%H-%M-%S`.mkv

You may also try CBR mode and set the virtual buffer size. Please refer to
Random blockiness in the picture RTSP server-client - Jetson TX2 - #5 by DaneLLL
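A sketch of that suggestion (property names are those of the Jetson nvv4l2h264enc element; the bitrate and vbv-size values here are illustrative assumptions, not values given in this thread):

```shell
# CBR mode with an explicit virtual buffer size on nvv4l2h264enc.
# control-rate=1 selects constant bitrate on the Jetson v4l2 encoders;
# bitrate is in bit/s -- tune both values for your stream.
gst-launch-1.0 -v -e videotestsrc is-live=1 num-buffers=250 \
    ! video/x-raw, format=\(string\)NV12, framerate=\(fraction\)25/1 \
    ! nvvidconv \
    ! nvv4l2h264enc control-rate=1 bitrate=4000000 vbv-size=400000 \
    ! h264parse \
    ! matroskamux \
    ! filesink location=test-cbr.mkv
```

Run `gst-inspect-1.0 nvv4l2h264enc` to confirm the exact property names and defaults on your L4T release.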

Hi @DaneLLL

Thank you for your suggestion.
How could those two changes explain why I’m seeing different results inside and outside of docker?

I tried the pipeline you provided. It produces video files of ~2MB both inside and outside the container. Even though the size is about that of the low-quality video I got inside the container before, the video quality looks quite good.

However, with my real pipeline this does not work well: I get very few frames, maybe one every 5 s.
The real pipeline now looks like this:

gst-launch-1.0 -v -e rtspsrc location=rtsp:// latency=50 ! \
rtpjitterbuffer ! \
rtph265depay ! \
video/x-h265, framerate=\(fraction\)25/1 ! \
queue ! \
h265parse ! \
nvv4l2decoder ! \
queue ! \
nvv4l2h264enc !  \
tee name=o ! \
h264parse ! \
mp4mux ! \
filesink location=/opt/data/videos/2021-07-30-12-59-21.mp4 o. ! \
rtph264pay config-interval=1 pt=103 ssrc=31699 ! \
tee name=t ! \
udpsink host= port=8006 async=false t. ! \
udpsink host= port=5002 async=false t. ! \
udpsink host= port=40017 async=false

As you can see, I am receiving H265-encoded video (1080p, ~2 Mbit/s), transcoding it to H264, then recording it to a file and redistributing it to a couple of clients. I also switched the decoder from omxh265dec to nvv4l2decoder.

This used to work fine outside of docker with omx.

Before inserting video/x-h265, framerate=\(fraction\)25/1 after rtph265depay, I was seeing framerate=0/1 in some caps, so I thought I had the issue discussed here:

But after adding the framerate caps, I no longer see any 0 framerates, but the issue persists.

We run the commands on r32.5.1/TX2 and don’t observe the issue. Please try

$ export DISPLAY=:0
$ xhost +
$ sudo docker run -it --rm --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix

$ gst-launch-1.0 -v -e videotestsrc is-live=1 num-buffers=250 ! video/x-raw, format=\(string\)NV12, framerate=\(fraction\)25/1 ! nvvidconv ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location=test.mkv

Then see if you get the same result.

You are right, this issue stemmed from a different part of the video streaming pipeline.

Do you have an explanation for the different behavior inside and outside the container?

So you still see the issue even when you launch docker by executing:

$ export DISPLAY=:0
$ xhost +
$ sudo docker run -it --rm --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix

We would need to reproduce your observation first and then do further investigation. So far we have tried to reproduce it and see identical results inside and outside docker.

No, with nvv4l2h264enc I don’t have this issue; both the quality and the framerate are fine.

But the root issue remains unresolved for me. It’s great that after switching to nvv4l2h264enc I no longer see different behavior inside and outside the container, but I don’t understand why the difference was there to begin with, or why it’s gone now.

We have deprecated the omx plugins on Jetpack 4.x releases. The plugins are still in the release but are no longer maintained, so it’s possible they don’t work in some use-cases. We will remove the plugins in a future Jetpack release.

Please use the v4l2 plugins to implement your use-case.
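For reference, the commonly documented omx-to-v4l2 element substitutions on Jetpack 4.x look like the following (verify with `gst-inspect-1.0` on your release; the filenames below are placeholders):

```shell
# Deprecated OMX element          Maintained V4L2 replacement
#   omxh264enc              ->    nvv4l2h264enc
#   omxh265enc              ->    nvv4l2h265enc
#   omxh264dec / omxh265dec ->    nvv4l2decoder

# Example: a v4l2-only transcode stage similar to the one in this thread.
# nvv4l2decoder generally wants a parser (here h265parse) in front of it.
gst-launch-1.0 -v -e filesrc location=input.h265 \
    ! h265parse \
    ! nvv4l2decoder \
    ! nvv4l2h264enc \
    ! h264parse \
    ! matroskamux \
    ! filesink location=transcoded.mkv
```

The v4l2 decoder and encoder exchange buffers in NVMM memory, so no extra nvvidconv is needed between them in this sketch.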

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.