• Hardware Platform (Jetson / GPU): NVIDIA A2
• DeepStream Version: 6.1.1
• NVIDIA GPU Driver Version (valid for GPU only): 530.41.03
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
We are getting errors when trying to hardware-decode and encode (nvv4l2h264enc) on an A2 GPU. The errors are as follows.
ERROR: from element /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0: Failed to process frame.
Additional debug info:
gstv4l2videoenc.c(1398): gst_v4l2_video_enc_handle_frame (): /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0:
Maybe be due to not enough memory or failing driver
Execution ended after 0:00:01.205040711
Setting pipeline to NULL ...
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Freeing pipeline ...
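As an aside, the repeated `Cuda failure: status=702` lines can be decoded if the status values follow the CUDA driver API's `CUresult` enum; a minimal lookup sketch (the error names are my reading of `cuda.h`, not something confirmed by the log itself, so verify against your toolkit's headers):

```shell
# Sketch: map the numeric "Cuda failure: status=NNN" codes from the log above
# to CUDA error names. Values are assumed to follow the CUDA driver API's
# CUresult enum; check cuda.h for your toolkit version.
decode_cuda_status() {
  case "$1" in
    700) echo "CUDA_ERROR_ILLEGAL_ADDRESS" ;;
    701) echo "CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES" ;;
    702) echo "CUDA_ERROR_LAUNCH_TIMEOUT" ;;
    *)   echo "unrecognized status $1" ;;
  esac
}
decode_cuda_status 702
```

Under that assumption, status 702 would be a launch timeout rather than a plain out-of-memory condition, which is consistent with the "not enough memory or failing driver" guess in the GStreamer message being only a guess.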
About GPU
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 530.41.03 Driver Version: 530.41.03 CUDA Version: 12.1 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA A2 Off| 00000000:C4:00.0 Off | 0 |
| 0% 42C P8 9W / 60W| 9MiB / 15356MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 1123 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 2354 G /usr/lib/xorg/Xorg 4MiB |
+---------------------------------------------------------------------------------------+
We got a similar error with driver versions 530, 525, and 515.
But when we downgraded the driver to version 470, the pipeline worked without errors.
Since we are using DeepStream 6.1.1, we need a more up-to-date CUDA version, so we cannot stay on driver version 470.
We need support regarding the cause of these errors and a solution.
I'm not sure, as I have no experience with your platform, but the bitrate seems very low to me. nvv4l2h264enc may expect the bitrate in bits/s, while x264enc and some other encoders specify it in kbit/s. Try bitrate=800000.
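To illustrate the unit mismatch suggested above (a sketch; the property units are the elements' documented conventions, worth double-checking with `gst-inspect-1.0` on your install):

```shell
# Hypothetical conversion: the same 800 kbit/s target expressed in each
# element's bitrate unit (x264enc: kbit/s, nvv4l2h264enc: bit/s), i.e. the
# nvv4l2h264enc value must be 1000x larger for the same target rate.
target_kbps=800
nvenc_bitrate=$((target_kbps * 1000))
echo "x264enc bitrate=${target_kbps} -> nvv4l2h264enc bitrate=${nvenc_bitrate}"
```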
@Honey_Patouceul
I tried nvv4l2h264enc with bitrate=800000, but unfortunately the result did not change and I still get the following errors.
nvbufsurface: NvBufSurfaceSysToHWCopy: failed in mem copy
nvbufsurface: NvBufSurfaceCopy: failed to copy
nvbufsurface: NvBufSurfaceSysToHWCopy: failed in mem copy
nvbufsurface: NvBufSurfaceCopy: failed to copy
ERROR in BufSurfacecopy
0:00:07.192074749 198 0x561c14517520 ERROR v4l2bufferpool gstv4l2bufferpool.c:2388:gst_v4l2_buffer_pool_process:<nvv4l2h264enc0:pool:sink> failed to prepare data
ERROR: from element /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0: Failed to process frame.
Additional debug info:
gstv4l2videoenc.c(1398): gst_v4l2_video_enc_handle_frame (): /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0:
Maybe be due to not enough memory or failing driver
Execution ended after 0:00:07.066906214
Setting pipeline to NULL ...
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Freeing pipeline ...
By the way, this pipeline runs smoothly (without any errors) on an RTX 2070 Super GPU with the 520 driver, a GTX 1080 Ti with the 525 driver, and a T4 GPU with the 530 driver installed. So the problem occurs only when running on the A2 GPU.
I also tried with DeepStream 6.2 and driver version 530.41.03, but the result did not change; we still keep getting the same error.
nvbufsurface: NvBufSurfaceSysToHWCopy: failed in mem copy
nvbufsurface: NvBufSurfaceCopy: failed to copy
nvbufsurface: NvBufSurfaceSysToHWCopy: failed in mem copy
nvbufsurface: NvBufSurfaceCopy: failed to copy
ERROR in BufSurfacecopy
0:00:06.463089905 160 0x556520b8a980 ERROR v4l2bufferpool gstv4l2bufferpool.c:2388:gst_v4l2_buffer_pool_process:<nvv4l2h264enc0:pool:sink> failed to prepare data
ERROR: from element /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0: Failed to process frame.
Additional debug info:
gstv4l2videoenc.c(1489): gst_v4l2_video_enc_handle_frame (): /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0:
Maybe be due to not enough memory or failing driver
Execution ended after 0:00:06.366975802
Setting pipeline to NULL ...
Latest update: I tried the same pipeline on an A10 GPU with driver version 525 and it worked fine (no errors). But with the same driver version and pipeline, we still get the error on the A2 GPU.
A10 GPU (working successfully, error-free) and driver information:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A10G Off | 00000000:00:1E.0 Off | 0 |
| 0% 33C P0 58W / 300W | 0MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
We can connect to the stream published by the pipeline; the ffprobe result is as follows.
We do our development and testing with the official NVIDIA Docker image: nvcr.io/nvidia/deepstream:6.1.1-triton. We use GStreamer, not FFmpeg. GStreamer uses nvv4l2h264dec and nvv4l2h264enc, developed by NVIDIA and included in the Docker image, as the decoder/encoder.
While the same pipeline does not cause problems on GPUs such as the A10, T4, 2070 Super, and 1080 Ti, unfortunately the A2 does have a problem. We will also test it with the A30, but I don't think there will be any problems there.
Did you try to run a deepstream pipeline with HW encoder/decoder on A2 GPU?
I don't have an A2 GPU; I have tried it on T4 and A3000 GPUs, and even on a Jetson Orin.
I think it's a bug in the codec driver.
DeepStream can't resolve it by itself; I have reported it to the codec driver team.
Maybe you can get some help there.
Setting pipeline to PLAYING ...
New clock: GstSystemClock
nvbufsurface: NvBufSurfaceSysToHWCopy: failed in mem copy
nvbufsurface: NvBufSurfaceCopy: failed to copy
ERROR in BufSurfacecopy
0:00:06.304596854 124 0x55e79a768aa0 ERROR v4l2bufferpool gstv4l2bufferpool.c:2388:gst_v4l2_buffer_pool_process:<nvv4l2h264enc0:pool:sink> failed to prepare data
ERROR: from element /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0: Failed to process frame.
Additional debug info:
gstv4l2videoenc.c(1398): gst_v4l2_video_enc_handle_frame (): /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0:
Maybe be due to not enough memory or failing driver
Execution ended after 0:00:04.647833737
Setting pipeline to NULL ...
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Cuda failure: status=702
nvbufsurface: Error(-1) in releasing cuda memory
Freeing pipeline ...
The GPU and driver version we use are as below.
Tue Jul 4 10:55:03 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06 Driver Version: 525.125.06 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A2 Off | 00000000:C4:00.0 Off | 0 |
| 0% 38C P8 5W / 60W | 9MiB / 15356MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
We do these operations in a docker container using the nvcr.io/nvidia/deepstream:6.1.1-triton image.
I ran the pipeline below with driver version 525 in the container we created from the nvcr.io/nvidia/deepstream:6.1.1-triton Docker image, but I kept getting the errors, so nothing changed.
But when I downgraded the driver version from 525 to 470, it worked without any errors.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
Could driver version 470 with cuda-compat-12-0 work for you as a workaround?
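For context on that suggestion: the cuda-compat package ships forward-compatibility user-mode CUDA libraries, which can let a CUDA 12.0 application run on top of an older R470 kernel driver. A minimal sketch of wiring it in (the install path follows NVIDIA's usual packaging convention and is an assumption here; verify the exact directory on your system):

```shell
# Prepend the assumed cuda-compat library directory to the loader path so the
# CUDA 12.0 user-mode libraries are found ahead of the driver's own libraries.
COMPAT_DIR=/usr/local/cuda-12.0/compat
export LD_LIBRARY_PATH="${COMPAT_DIR}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

Note that if the A2 encoder failure is in the kernel-mode driver itself rather than in the CUDA user-mode stack, forward compatibility may not change anything, which is presumably why this is phrased as a question.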