GPU : Quadro T400
Driver : 510.47.03
CUDA : 11.6
OS : Debian 11 (Headless)
I am trying to use NVENC hardware acceleration through ffmpeg. However, when I enable the CUDA hardware accelerator, the following error is shown:
ffmpeg -y -hwaccel cuda -hwaccel_output_format cuda -i input.mkv -c:a copy -c:v h264_nvenc output.mkv
[h264 @ 0x55b885e76c00] decoder->cvdl->cuvidGetDecoderCaps(&caps) failed -> CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
[h264 @ 0x55b885e76c00] Failed setup for format cuda: hwaccel initialisation returned error.
This error keeps showing up even though the GPU seems to be detected as CUDA-capable.
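In case it helps with diagnosis: since the failing call (cuvidGetDecoderCaps) comes from the NVDEC/cuvid library rather than the CUDA runtime, these are the kinds of checks that should show whether the driver and libnvcuvid are both visible to ffmpeg. This is a diagnostic sketch, not output captured from my machine:

```shell
# Driver sees the GPU at all?
nvidia-smi

# Is the decode library installed where the loader can find it?
# (libnvcuvid.so ships with the driver, not with the CUDA toolkit)
ldconfig -p | grep -i nvcuvid

# Does this ffmpeg build list cuda among its hwaccels,
# and were the NVENC encoders compiled in?
ffmpeg -hide_banner -hwaccels
ffmpeg -hide_banner -encoders | grep -i nvenc
```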
I installed the CUDA toolkit and packages following the official installation guide (Installation Guide Linux :: CUDA Toolkit Documentation), and compiled ffmpeg following the NVIDIA guide (Using FFmpeg with NVIDIA GPU Hardware Acceleration :: NVIDIA Video Codec SDK Documentation).
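For completeness, the build steps I followed were roughly those from that guide (a sketch from memory; my exact invocation may have differed slightly):

```shell
# Install the NVENC/NVDEC API headers first
git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
cd nv-codec-headers && sudo make install && cd ..

# Build ffmpeg with CUDA/NVENC support enabled
git clone https://git.ffmpeg.org/ffmpeg.git ffmpeg/
cd ffmpeg
./configure --enable-nonfree --enable-cuda-nvcc --enable-libnpp \
    --extra-cflags=-I/usr/local/cuda/include \
    --extra-ldflags=-L/usr/local/cuda/lib64
make -j"$(nproc)"
```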
./deviceQuery Output :
CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "NVIDIA T400"
  CUDA Driver Version / Runtime Version          11.6 / 11.6
  CUDA Capability Major/Minor version number:    7.5
  Total amount of global memory:                 1874 MBytes (1964769280 bytes)
  (006) Multiprocessors, (064) CUDA Cores/MP:    384 CUDA Cores
  GPU Max Clock rate:                            1425 MHz (1.42 GHz)
  Memory Clock rate:                             5001 Mhz
  Memory Bus Width:                              64-bit
  L2 Cache Size:                                 524288 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total shared memory per multiprocessor:        65536 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 3 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device supports Managed Memory:                Yes
  Device supports Compute Preemption:            Yes
  Supports Cooperative Kernel Launch:            Yes
  Supports MultiDevice Co-op Kernel Launch:      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 6 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.6, CUDA Runtime Version = 11.6, NumDevs = 1
Result = PASS
nvidia-smi output:
Thanks to anyone who has a clue about the origin of this issue.