I used a FIFO and it works for passing the YUV, but it is not as fast as connecting the two components through GPU memory. So my question is: how do you connect the MMAPI samples provided?
I read that the “input” of the SW goes to the “output buffer” and the output of the SW goes to the “capture buffer”. So how do you connect those pieces of SW using MMAPI?
If this is not possible over the CLI (i.e. passing pointers/metadata), that would mean it can only be done in C++; in that case, can you point me to a program that shows how this MMAPI works on the NVIDIA platform?
BTW: I also read in the “help” information that it is possible to pass “time stamps”:
--copy-timestamp <st> <fps> Enable copy timestamp with start timestamp(st) in seconds for decode fps(fps) (for input-nalu mode)
NOTE: copy-timestamp used to demonstrate how timestamp can be associated with an individual H264/H265 frame to achieve video-synchronization
But there is no documentation available on how to achieve this…
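From the “(for input-nalu mode)” note, the flag seems intended for the decode sample running in NAL-unit input mode. An invocation along these lines would stamp frames starting at t=0 s for 30 fps decode; the sample name, argument order, and input path here are assumptions from the help text, not verified:

```shell
# Hypothetical invocation based on the help text above:
# start timestamp 0 seconds, decode fps 30, NAL-unit input mode.
./video_decode H264 --input-nalu --copy-timestamp 0 30 ../streams/sample.h264
```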
I will post a “new” topic, as I’m sure there are more people like me who are very frustrated chasing concrete answers on how to use the multimedia capabilities of your embryonic product.
At the end of this video: Raspberry Pi 4B vs Jetson Nano - YouTube, it says that NVIDIA sent him an email confirming plans to support ffmpeg. How is it going? I’m very excited to use ffmpeg, please do it!!!
We are still looking into supporting it.
For clarity: because there are independent hardware encoding/decoding blocks (NVENC and NVDEC) on Jetson Nano, the hardware acceleration will be implemented on those blocks, not on the GPU.
I would also like to know if you’re still working on getting ffmpeg to use the hardware, as we have an application the Jetson would fit perfectly, but we need to be able to encode in HEVC to make the swap.