I know that Docker support is still considered experimental; I just wanted to find out whether this is a known issue or whether I am missing something.
I am using a Jetson TX2 dev board with JetPack 4.3 (recently flashed), with the l4t-base:r32.3.1 and deepstream-l4t:4.0.2-19.12-samples images installed.
I am running the following pipeline* both on the host and in the Docker container:
gst-launch-1.0 -e videotestsrc num-buffers=300 ! timeoverlay ! \
'video/x-raw, format=(string)I420, width=(int)1280, height=(int)720' ! \
omxh264enc bitrate=8000000 ! \
'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! \
qtmux ! filesink location=$FILE
When I look at the output video created in the Docker container, the encoding quality is much lower than that of the video created on the host. The difference also shows up in the file sizes:
$ ls -l videotestsrc_*.mp4
-rw-r--r-- 1 root root 612163 Feb 17 13:24 videotestsrc_docker.mp4
-rw-rw-rw- 1 slt slt 5876646 Feb 17 13:24 videotestsrc_host.mp4
Downloads: docker output, host output.
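At videotestsrc's default 30 fps, 300 buffers is about 10 seconds of video, so the host file works out to roughly 5,876,646 × 8 / 10 ≈ 4.7 Mbit/s, while the Docker one is only about 0.5 Mbit/s. If it helps, the stream details can also be dumped with gst-discoverer-1.0 (it ships with the standard GStreamer tools; the invocation below is just an example):
gst-discoverer-1.0 -v videotestsrc_docker.mp4
gst-discoverer-1.0 -v videotestsrc_host.mp4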
This is how I run the Docker container:
docker run -it --rm --net=host --runtime nvidia -e DISPLAY=:1 \
-v /tmp/.X11-unix/:/tmp/.X11-unix \
-v /tmp/argus_socket:/tmp/argus_socket \
-v $PWD:/workdir \
nvcr.io/nvidia/l4t-base:r32.3.1
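If it helps with debugging, these are the kinds of checks I can run inside the container (the paths are an assumption based on my understanding of how the nvidia runtime mounts the Tegra user-space libraries into l4t-base):
# inside the container: the Tegra libs should be bind-mounted by the nvidia runtime
ls /usr/lib/aarch64-linux-gnu/tegra | head
# and the OMX encoder element should be present with its rate-control properties
gst-inspect-1.0 omxh264enc | grep -iE 'bitrate|control-rate'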
I tried running the pipeline with GST_DEBUG=omx*:4 but did not notice any difference in the debug output between the host and the container. I also tried the deepstream-l4t image just in case and got the same results.
Any ideas what is going on? Am I missing a configuration option on omxh264enc or in the Docker container itself? Thanks!
*I also tried the following pipeline to make sure it’s not an issue with videotestsrc:
gst-launch-1.0 -e nvarguscamerasrc ! \
'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! \
nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! omxh264enc bitrate=8000000 ! \
'video/x-h264, stream-format=(string)byte-stream' ! \
h264parse ! qtmux ! filesink location=$FILE
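If forcing a rate-control mode would help narrow this down, I can also run a CBR variant of the first pipeline; something along these lines (control-rate and its constant value are what gst-inspect-1.0 omxh264enc lists on L4T as far as I can tell, so treat this as a sketch):
gst-launch-1.0 -e videotestsrc num-buffers=300 ! timeoverlay ! \
'video/x-raw, format=(string)I420, width=(int)1280, height=(int)720' ! \
omxh264enc bitrate=8000000 control-rate=constant ! \
'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! \
qtmux ! filesink location=$FILE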