ffmpeg using hardware gpu (cuda)


ffmpeg with GPU support is not enabled on Jetson platform.
Please use MMAPI or GStreamer instead.
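As a hedged illustration of the GStreamer route (element names vary by L4T release: older images ship the OMX decoder `omxh264dec`, newer ones use `nvv4l2decoder`; `sample.mp4` is a placeholder), a typical hardware-accelerated decode pipeline looks like:

```shell
# Decode an H.264 file with the Jetson hardware decoder and display it.
# Element names are release-dependent -- check `gst-inspect-1.0 | grep nv`
# on your image and substitute accordingly.
gst-launch-1.0 filesrc location=sample.mp4 ! qtdemux ! h264parse ! \
    nvv4l2decoder ! nvoverlaysink -e
```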


We also need a GPU enabled ffmpeg version for our recognition project. Could you please provide a few hints how to compile ffmpeg with nvenc and jetson nano support?


Where can we get examples of the MMAPI functionality? For example, how to decode to YUV and then scale the resulting YUV file?


This is not supported as described in #2

After installing the whole package through SDK Manager, the samples are at /usr/src/nvidia/tegra_multimedia_api
You can begin with 00_video_decode
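To get started, a sketch of building and running that sample (paths and argument order as found in the JetPack 4.x samples; the bundled clip name is an assumption, so check the sample's help output on your release):

```shell
# Build and run the 00_video_decode sample on the device.
# Requires a Jetson with the Multimedia API package installed.
cd /usr/src/nvidia/tegra_multimedia_api/samples/00_video_decode
make
./video_decode H264 ../../data/Video/sample_outdoor_car_1080p_10fps.h264
```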

It looks like I was not clear in my question.

How do you connect the decoded YUV output to the YUV scaler, i.e. connect

00_video_decode to 07_video_convert?

I used a FIFO and it works to pass the YUV, but it is not as fast as it would be if we connected the two components over GPU memory. So my question is: how do you connect the MMAPI samples provided?
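The FIFO approach described above can be sketched as follows (the option names `--disable-rendering`/`-o` and the `video_convert` argument order are assumptions based on the JetPack 4.x samples; verify with each sample's help output). Note this still copies frames through CPU memory; a zero-copy connection over DMA buffers requires writing C++ against the MMAPI.

```shell
# Decode H.264 into a named pipe, then scale the raw YUV with the
# converter sample. Resolutions and formats are placeholders.
mkfifo /tmp/frames.yuv
./video_decode H264 input.h264 --disable-rendering -o /tmp/frames.yuv &
./video_convert /tmp/frames.yuv 1920 1080 YUV420 scaled.yuv 1280 720 YUV420
rm /tmp/frames.yuv
```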

I read that the “input” of the SW goes to the “output buffer” and the output of the SW goes to the capture buffer. So how do you connect those pieces of SW using MMAPI?

If this is not possible over the CLI (i.e. passing pointers/metadata), it might mean it is only possible using C++; in that case, can you point me to a program that shows us how this MMAPI works on the NVIDIA platform?

BTW: I also read in the “help” information that it is possible to pass “Time Stamps”:

--copy-timestamp <st> <fps> Enable copy timestamp with start timestamp(st) in seconds for decode fps(fps) (for input-nalu mode)
	NOTE: copy-timestamp used to demonstrate how timestamp can be associated with an individual H264/H265 frame to achieve video-synchronization
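Based on that help text, a hypothetical invocation (file name is a placeholder; I am assuming `--input-nalu` is the matching option mentioned in the note) would look like:

```shell
# Start timestamps at 0 s for a 30 fps stream, feeding the decoder
# one NAL unit at a time so each frame can carry its own timestamp.
./video_decode H264 input.h264 --input-nalu --copy-timestamp 0 30
```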

But there is no documentation available on how to achieve this…

Hi rodolfop9lt0,
This topic is about ffmpeg with hardware acceleration, which is confirmed not supported.
Your case looks different, so please make a new post.

The topic is also about using MMAPI…

I will post a “new” topic, as I’m sure there are more people like me who are very frustrated chasing concrete answers on how to use the multimedia capabilities of your embryonic product.

We would like to keep each topic clear. Your questions are not about ‘ffmpeg using hardware gpu (cuda)’.

Done: “MMAPI samples: How to interconnect the samples provided using MMAPI?”

At the end of this video: Raspberry Pi 4B vs Jetson Nano - YouTube it says that NVIDIA sent him an email confirming plans to support ffmpeg. How is it going? I’m very excited to use ffmpeg, please do it!!!

We are still evaluating whether to support it.
To be clear: because there are independent hardware encoding/decoding blocks (NVENC and NVDEC) on Jetson Nano, the hardware acceleration would be implemented on those hardware blocks, not on the GPU.


I didn’t understand. Don’t NVENC/NVDEC decode on the GPU? And what is a hardware block that is not the GPU?

Summing up: will ffmpeg be able to decode 8 1080p streams as in the DeepStream example that comes with the Nano?



Please check figure 1 in the TX1 TRM. You will see the NVDEC and NVENC hardware blocks, which are not the GPU.

Once we have confirmed support for ffmpeg with hardware acceleration and completed the implementation, we will check this.

Hello, is there any news on support for FFMPEG on the Jetson Nano?


I would also like to know if you’re still working on getting ffmpeg to use the hardware, as we have an application the Jetson would fit perfectly, but we need to be able to encode in HEVC to make the swap.



ffmpeg support on Jetson Nano, supporting decoding and encoding
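For reference, a hedged transcoding example using the `h264_nvmpi` codec name that the linked project registers (the `*_nvmpi` names come from that project and may change between versions; verify with `ffmpeg -codecs | grep nvmpi` on your build):

```shell
# Transcode using the hardware decoder and encoder exposed by the
# patched ffmpeg build; file names and bitrate are placeholders.
ffmpeg -c:v h264_nvmpi -i input.mp4 -c:v h264_nvmpi -b:v 4M output.mp4
```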


Hi jocover,
Many thanks for the sharing.

Hi jocover, thanks for adding decode and encode support.

Would you mind also re-posting your project to the Jetson Projects forum along with a description for added visibility? Thanks!

@Jocover. Thank you so much!!!

It would be great to see hardware scaling in this project; unfortunately my C/C++ skills are not at the level required, but I can help in other ways to keep this project going…

Also, I just reported a couple of bugs that came with 32.2 and 32.2.1.



Your effort is also affected by those two bugs, “H.264 bFrames timestamps” and “H.264i Video Decoding wrong timestamps”, as it looks like the timestamps are interpreted at 2× (as fields) rather than at the new rate after conversion to progressive video.

In any case. THANK YOU SO much! and again, please let me know how I can help!