Deepstream encoder not working on x86 machines

System information:

  • GPU: 4060 or A4000
  • DeepStream Version 6.4
  • NVIDIA GPU Driver Version: 535.183.01

Hello,
I am currently trying to use the nvv4l2h264enc plugin, but I am running into a strange issue. For some reason the plugin fails to initialize the nvenc context and, consequently, fails to allocate memory.

Here is the gstreamer pipeline I am using:
gst-launch-1.0 videotestsrc ! nvvideoconvert ! nvv4l2h264enc ! fakesink

Here is the error log I am getting:

Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ENC_CTX(0x77a3f00215a0) Error in initializing nvenc context 
Redistribute latency...
0:00:00.032246879 72277 0x592999c0cc60 ERROR          v4l2allocator gstv4l2allocator.c:784:gst_v4l2_allocator_start:<nvv4l2h264enc0:pool:src:allocator> error requesting 2 buffers: Cannot allocate memory
0:00:00.032264232 72277 0x592999c0cc60 ERROR         v4l2bufferpool gstv4l2bufferpool.c:1217:gst_v4l2_buffer_pool_start:<nvv4l2h264enc0:pool:src> we received 0 buffer from device '/dev/v4l2-nvenc', we want at least 2
0:00:00.032270794 72277 0x592999c0cc60 ERROR             bufferpool gstbufferpool.c:572:gst_buffer_pool_set_active:<nvv4l2h264enc0:pool:src> start failed
Redistribute latency...
ERROR: from element /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0: Could not get/set settings from/on resource.

Upon further inspection I noticed that the “device” property of the nvv4l2h264enc plugin is set to “/dev/v4l2-nvenc” by default and, since this property is read-only, it cannot be changed. I also know the nvv4l2decoder plugin is working correctly, and its default “device” property is “/dev/nvidia0”.
If I list all available devices in “/dev” I do not see “/dev/v4l2-nvenc”. Perhaps I am missing a system dependency of some kind, but I find it strange that the decoder and encoder plugins have different default “device” property values.
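
For reference, this is roughly how I checked the property and the device nodes (the grep filter is just for readability):

gst-inspect-1.0 nvv4l2h264enc | grep -i -A2 "device"   # shows the read-only "device" property
ls -l /dev/nvidia* /dev/v4l2* 2>/dev/null              # lists the NVIDIA / V4L2 device nodes present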

I have tried to symlink “/dev/v4l2-nvenc” to “/dev/nvidia0” in an attempt to fix this issue; however, this did not work.
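
The attempt was essentially the following:

sudo ln -s /dev/nvidia0 /dev/v4l2-nvenc   # create /dev/v4l2-nvenc pointing at /dev/nvidia0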

Am I missing something or is this behavior not expected?

Thanks,
Francisco

I think this problem is caused by DeepStream not being installed correctly.

Try /opt/nvidia/deepstream/deepstream/install.sh

I have run the script and the issue persists. Perhaps I am missing some kind of dependency, or maybe some driver flag is not enabled? Do I need to do anything specific during the installation process in order to enable HW encoding?
Also I don’t know if this is relevant but I’m running this inside a docker container

Try the following command line to start the docker container.

docker run --gpus all -it --rm --net=host --privileged -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.4 nvcr.io/nvidia/deepstream:6.4-triton-multiarch

I am running a custom container which is launched by docker-compose. In my docker-compose file I already specify that the container runs in privileged mode with all the correct GPU flags; in fact, I am using multiple NVIDIA technologies and all of them work, except for the nvv4l2enc plugins.
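
For reference, here is a rough sketch of the GPU-related part of my compose setup (service and image names are placeholders, written out as a shell heredoc):

cat > docker-compose.yml <<'EOF'
services:
  app:
    image: my-custom-deepstream:6.4      # placeholder for our custom image
    privileged: true
    network_mode: host
    runtime: nvidia                      # requires nvidia-container-toolkit on the host
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=all   # NVENC/NVDEC need the "video" capability; "all" covers it
EOF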

1. If you use the command above, does it work?

2. How do you build your custom image? Can you share your approach?

For DeepStream docker images, we recommend GitHub - NVIDIA-AI-IOT/deepstream_dockers: A project demonstrating how to make DeepStream docker images.

Or you can try to update nvidia-container-toolkit and docker.

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html

The container you suggested does seem to be working properly. In order to install DeepStream we are getting the deb from the official NVIDIA repo and installing it during the docker build process; the issue I am having is probably there. I will review the link you sent me regarding DeepStream containers and I will get back to you.
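
In case it helps, the install step in our docker build is essentially the following (the deb filename is from memory and may not be exact):

apt-get update
apt-get install -y ./deepstream-6.4_6.4.0-1_amd64.deb   # deb downloaded from the official NVIDIA repo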

I must run an Ubuntu 22.04 custom container and I am facing the same problem.

Is there anything I can base my image on in order to run a container with deepstream-6.4 and nvv4l2h264enc?

My container is based on this one: deepstream_dockers/x86_64/ubuntu_base_runtime/Dockerfile at main · NVIDIA-AI-IOT/deepstream_dockers · GitHub.

Thanks in advance,
Nelson

First make sure nvidia-container-toolkit is installed, then try adding the --gpus all --privileged parameters. If that doesn’t solve the problem, please open a new topic.

https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
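
You can also check from inside the container whether the runtime actually injected the NVENC user-space library (the path below assumes the standard Ubuntu x86_64 library layout):

ls -l /usr/lib/x86_64-linux-gnu/libnvidia-encode.so*   # only mounted when the "video" driver capability is exposed
echo "$NVIDIA_DRIVER_CAPABILITIES"                     # shows which driver capabilities were requested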

I have nvidia-container-toolkit installed, and the --gpus all --privileged parameters are already set on the container.
I am able to run nvidia-smi inside the container.

My docker container has ubuntu-22.04 with cuda-12.2 and gstreamer-1.24.5 and deepstream-6.4 installed.

But somehow, I am not able to use the nvv4l2h264enc plugin.

If I run the following pipeline:
gst-launch-1.0 videotestsrc ! nvvideoconvert ! nvv4l2h264enc ! fakesink

I get this:

Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ENC_CTX(0x7c936401fc10) Error in initializing nvenc context 
Redistribute latency...
ERROR: from element /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0: Could not get/set settings from/on resource.
Additional debug info:
gstv4l2object.c(3565): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0:
Device is in streaming mode
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...

Is there anything I am missing during gstreamer or deepstream installations?

  1. Which version of the GPU driver are you using? This error may be caused by a GPU driver version mismatch. You can check the driver and GStreamer versions inside the container as shown below.
  2. DS-6.4 is compatible with GStreamer 1.20.3.
  3. Please open a new topic for your issues. Thanks.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Installation.html#id9
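
For example, both versions can be checked from inside the container with standard commands:

gst-launch-1.0 --version                                      # GStreamer version (DS-6.4 expects 1.20.3)
nvidia-smi --query-gpu=driver_version --format=csv,noheader   # GPU driver version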