The same version of ffmpeg is installed on both machines. A ton of CUDA and NVIDIA packages (all?) are also installed (more than on my machine with the RTX 2060). So… how do I get ffmpeg to use hardware acceleration?
Note 1: I’ve been able to use the acceleration through gstreamer, but I can’t use that in my software, since I need two videos as textures. So I really need the acceleration, and it has to work with a library that gives me the flexibility to do more than just display a movie’s video output.
Note 2: When I open the nvidia-settings tool on my desktop with the RTX 2060, I have an entry with a field named “Video Engine Utilization” which shows me what percentage of the video decoder/encoder is in use right now. It is not there on the Jetson. Any way to get that information somehow? A command line maybe? That way I could clearly verify whether the hardware is used or not.
Hi,
There are dedicated hardware encoders/decoders on Jetson platforms, and the implementation is different from desktop GPUs. We have enabled hardware decoding. Please refer to the development guide.
I am curious whether that also works for the NVIDIA Jetson AGX Xavier dev kit.
Based on the document I can see that I have to build ffmpeg for the Jetson device, but I could not build it on the Jetson successfully.
Does nvv4l2dec_init_decoder() work with a general-purpose ffmpeg build?
That looks good. We’ll try those functions and see what happens.
Is there any way to see the stats in a console, especially the percentage usage of the GPU and the VEU (Video Engine Utilization)? The same as we see in nvidia-settings.
That’s only a bare description of the function calls… we tried every possible way to compile code using those calls, without success… I think it’s a blind effort without a suitable reference, ideally with an example.
In the sample, decoded YUV frames are stored in an NvBuffer, and we can call NvEGLImageFromFd() to get an EGLImage:
/**
 * Creates an instance of EGLImage from a DMABUF FD.
 *
 * @param[in] display   An \ref EGLDisplay object used during the creation
 *                      of the EGLImage. If NULL, nvbuf_utils() uses
 *                      its own instance of EGLDisplay.
 * @param[in] dmabuf_fd DMABUF FD of the buffer from which the EGLImage
 *                      is to be created.
 *
 * @returns `EGLImageKHR` for success, `NULL` for failure
 */
EGLImageKHR NvEGLImageFromFd (EGLDisplay display, int dmabuf_fd);
And we can then perform CUDA operations on it:
/**
 * Performs CUDA operations on an EGLImage.
 *
 * @param image : EGL image
 */
static void
Handle_EGLImage(EGLImageKHR image)
{
    CUresult status;
    CUeglFrame eglFrame;
    CUgraphicsResource pResource = NULL;

    cudaFree(0);  /* establish the CUDA context */
    status = cuGraphicsEGLRegisterImage(&pResource, image,
                 CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
    if (status != CUDA_SUCCESS)
    {
        printf("cuGraphicsEGLRegisterImage failed: %d, cuda process stop\n",
               status);
        return;
    }

    status = cuGraphicsResourceGetMappedEglFrame(&eglFrame, pResource, 0, 0);
    if (status != CUDA_SUCCESS)
    {
        printf("cuGraphicsResourceGetMappedEglFrame failed\n");
    }

    status = cuCtxSynchronize();
    if (status != CUDA_SUCCESS)
    {
        printf("cuCtxSynchronize failed\n");
    }

    if (eglFrame.frameType == CU_EGL_FRAME_TYPE_PITCH)
    {
        /* Draw rectangle labels in plane Y; you can replace this
         * with any CUDA algorithm. */
        addLabels((CUdeviceptr) eglFrame.frame.pPitch[0], eglFrame.pitch);
    }

    status = cuCtxSynchronize();
    if (status != CUDA_SUCCESS)
    {
        printf("cuCtxSynchronize failed after memcpy\n");
    }

    status = cuGraphicsUnregisterResource(pResource);
    if (status != CUDA_SUCCESS)
    {
        printf("cuGraphicsUnregisterResource failed: %d\n", status);
    }
}
Please check the samples and see if they can be applied to your use case. Thanks.