However, because NVENC/NVDEC support hasn't been implemented for the Nano in ffmpeg yet, the command would use this library to support the hardware encoder/decoder in ffmpeg, and this library (POCL) to add OpenCL support.
This should be able to use the decoding block on any of the current Jetson models, keep the frames in GPU/system RAM, apply OpenCL 1.2 filters, again keeping the data in GPU RAM, and encode to H.264 in an output file.
I would test this myself on hardware, but I don't have the spare budget to commit to the device if this isn't going to work. [Computer engineering student on a tight budget, looking for advice, and hoping someone with hardware might be able to help me out.]
Yes, in the NVIDIA package, hardware encoding is not enabled.
Since we have independent NVENC and NVDEC hardware engines on Jetson platforms, we usually leverage those engines and leave the GPU free for deep-learning inference. We don't have experience enabling OpenCL in ffmpeg; let's see if others can share suggestions on this.
Thanks for the reply! I've looked through the docs, but unfortunately one of the requirements for my project is the use of ffmpeg.
The pipeline would be: decode in the hardware decoder, do the filtering/image processing on the GPU (tone mapping and resizing), then pass that to the hardware encoder to be encoded in the relevant format. [The result would then be sent to a different device.]
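As a rough sketch of that pipeline, assuming an ffmpeg build that includes both OpenCL support and jocover's jetson-ffmpeg patches (the `h264_nvmpi` codec name comes from that patchset, not mainline ffmpeg), the invocation might look something like this. Untested on hardware; filter names and options are from mainline ffmpeg documentation:

```shell
# Sketch only (untested): hardware decode -> OpenCL tone map -> resize -> hardware encode.
# Assumptions: h264_nvmpi is provided by jocover's patches; tonemap_opencl
# and hwupload/hwdownload are standard ffmpeg OpenCL filters.
ffmpeg -init_hw_device opencl=ocl -filter_hw_device ocl \
  -c:v h264_nvmpi -i input.mp4 \
  -vf "format=p010,hwupload,tonemap_opencl=tonemap=hable:format=nv12,hwdownload,format=nv12,scale=1280:720" \
  -c:v h264_nvmpi output.mp4
```

One caveat: mainline ffmpeg has no OpenCL scaling filter, so the `scale` step above runs on the CPU after `hwdownload`; keeping the resize on the GPU would need a custom kernel via `program_opencl` or a different filter chain, so frames do not stay in GPU memory end to end in this sketch.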
Essentially, between the current NVIDIA ffmpeg build and the version created by @jocover, I don't think I need to worry about being able to leverage the built-in decode/encode engines.