Does the HW video encoder need to be initialized on the host before it can be used in a container?

It seems that if I try to use the HW encoder in a container without having used it on the host first, it fails to initialize, and after that it no longer works even on the host side.
Is that expected? Is there any way to initialize the encoder from a container safely?

I’m using NVIDIA Container Runtime on Jetson.

Well, I added this line to l4t.csv:
lib, /lib/firmware/tegra18x/nvhost_nvenc061.fw

and it seems the encoder can be initialized from a container.
Is this safe to do, or is there a reason why the fw file is not listed in l4t.csv?
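For anyone else trying this: the line goes alongside the existing entries in the csv consumed by the NVIDIA Container Runtime. On my setup that file is at /etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv (check the exact path for your L4T release):

```
# /etc/nvidia-container-runtime/host-files-for-container.d/l4t.csv (excerpt)
# ... existing dev/lib/sym entries ...
lib, /lib/firmware/tegra18x/nvhost_nvenc061.fw
```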


Thanks for sharing the solution.

We are evaluating whether to support it. For now, we would appreciate forum users sharing similar experiences.

Hi tadayuki.okada,
Did you find the root cause? I ran into a very similar case, but your solution is not working for me. Here is my post:

Hi luisyin,

I assume the root cause is that some firmware files are not visible inside the container, so some hw resources can’t be initialized.

If you can show your gstreamer pipeline, I might be able to tell you which files need to be added in the container.

Thanks for your reply. Here is how I build the pipeline (the hardware decoder is what I need):

pipeline_str = g_strdup_printf(
    "rtspsrc location=%s latency=0 name=d "
    "d. ! queue ! capsfilter caps=application/x-rtp,media=video ! decodebin3 "
    "! nvvidconv ! videorate ! video/x-raw,framerate=1/1 ! nvjpegenc ! appsink name=video_sink "
    "d. ! queue ! capsfilter caps=application/x-rtp,media=audio ! decodebin "
    "! audioconvert ! audioresample ! audio/x-raw,format=S16LE,rate=16000,channels=1 "
    "! audiobuffersplit output-buffer-duration=1/1 ! appsink name=audio_sink",
    rtsp_url);

For now, my workaround is:
1. After a system reboot, run the pipeline below to do some warm-up:

gst-launch-1.0 filesrc location=sample_720p.mp4 ! decodebin3 ! nvjpegenc ! fakesink

2. Bring up the Docker container.
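The warm-up step can be sketched as a small boot-time script (run once after reboot, e.g. from a systemd oneshot unit). This is just an illustration of my setup; sample_720p.mp4 and the pipeline are the ones from step 1:

```shell
#!/bin/sh
# Warm-up sketch: exercise the HW JPEG encoder once on the host so that
# later use from the container succeeds.
SAMPLE=sample_720p.mp4
if [ -e "$SAMPLE" ] && command -v gst-launch-1.0 >/dev/null 2>&1; then
  gst-launch-1.0 filesrc location="$SAMPLE" ! decodebin3 ! nvjpegenc ! fakesink
  WARMED=yes
else
  WARMED=no  # sample file or GStreamer not available; skip warm-up
fi
echo "warm-up done: $WARMED"
```

After this script finishes, start the container as usual.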

BTW, here is how I start my container:

docker run -e LD_LIBRARY_PATH=:/usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu/tegra:/usr/local/cuda/lib64 \
  --net=host \
  -v /usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu \
  -v /usr/local/cuda/lib64:/usr/local/cuda/lib64 \
  --device=/dev/nvhost-ctrl --device=/dev/nvhost-ctrl-gpu --device=/dev/nvhost-prof-gpu \
  --device=/dev/nvmap --device=/dev/nvhost-gpu --device=/dev/nvhost-vic \
  --device=/dev/nvhost-nvdec --device=/dev/nvhost-nvjpg --device=/dev/nvhost-as-gpu \
  my-gstreamer-container

As you are using the jpeg encoder, you probably need to add this line to l4t.csv:
lib, /lib/firmware/tegra18x/nvhost_nvjpg011.fw

Also, I don’t see “--runtime nvidia” in your docker command line. Have you changed the default configuration so that you don’t need to specify it?

Then, you should check if /lib/firmware/tegra18x/nvhost_nvjpg011.fw is visible inside the container.
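A quick way to do that check (check_fw is just an illustrative helper, not part of any toolchain; the firmware path is the one above):

```shell
# Illustrative helper: report whether a firmware file is visible at a path.
check_fw() {
  if [ -e "$1" ]; then
    echo "visible: $1"
  else
    echo "missing: $1"
  fi
}

# Run inside the container (e.g. via docker exec <container> sh -c '...'):
check_fw /lib/firmware/tegra18x/nvhost_nvjpg011.fw
```

If it reports “missing”, the csv entry (or a bind mount) is not taking effect.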

Thanks, tadayuki.okada. I did have

lib, /lib/firmware/tegra18x/nvhost_nvjpg011.fw

in l4t.csv before, but it’s not working. I didn’t enable the “--runtime nvidia” option; it seems it’s not necessary.

After mapping “/lib/firmware” into the container, everything seems ok after reboot.

docker run -e LD_LIBRARY_PATH=:/usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu/tegra:/usr/local/cuda/lib64 \
  --net=host \
  -v /usr/lib/aarch64-linux-gnu:/usr/lib/aarch64-linux-gnu \
  -v /lib/firmware:/lib/firmware \
  -v /usr/local/cuda/lib64:/usr/local/cuda/lib64 \
  --device=/dev/nvhost-ctrl --device=/dev/nvhost-ctrl-gpu --device=/dev/nvhost-prof-gpu \
  --device=/dev/nvmap --device=/dev/nvhost-gpu --device=/dev/nvhost-vic \
  --device=/dev/nvhost-nvdec --device=/dev/nvhost-nvjpg --device=/dev/nvhost-as-gpu \
  my_container

Thanks again.