How can I encode H.264 from raw data on Orin Nano?

Hi team,
We want to capture raw data (12-bit) from the MIPI CSI interface, encode it in H.264 format, and save it on an SSD, but I can't find a GStreamer command for this in the docs. Can anyone help?
By the way, I found I can't use nvv4l2h264enc on this platform; it reports this error:
ERROR: from element /GstPipeline:pipeline0/nvv4l2h264enc:nvv4l2h264enc0: Cannot identify device '/dev/v4l2-nvenc'.

and my test command is below:

gst-launch-1.0 videotestsrc num-buffers=30 ! 'video/x-raw, width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nvv4l2h264enc iframeinterval=100 ! h264parse ! qtmux ! filesink location=test.mp4 -e

Hi,
Orin Nano does not have hardware encoders. Please use a software encoder such as the x264enc plugin.
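For reference, a minimal sketch of a software-encoded version of your test pipeline (the encoder settings here are illustrative, not required values):

gst-launch-1.0 videotestsrc num-buffers=30 ! 'video/x-raw, width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1' ! x264enc speed-preset=ultrafast tune=zerolatency ! h264parse ! qtmux ! filesink location=test.mp4 -e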

But if we do H.264 encoding in software on the CPU, it requires very high CPU load when we capture 2-way MIPI data of about 17 Mpixel video. Can we use the GPU to encode?

Hi,

This is not supported on Jetson platforms. For hardware encoding, please consider using the Orin NX or Xavier NX platforms.

From this doc (CUDA GPUs - Compute Capability | NVIDIA Developer), it seems the GPU supports H.264 encode functions, but I can't find any sample for it in the CUDA samples (GitHub - NVIDIA/cuda-samples: Samples for CUDA Developers which demonstrates features in CUDA Toolkit).
And I can't find any API for it either (CUDA Runtime API :: CUDA Toolkit Documentation (nvidia.com)).

Hi,
There should be no existing CUDA code for H.264 encoding. You would need to implement it yourself if you would like to utilize the GPU engine.

Thanks! We are going to switch from Orin Nano to the NX platform. By the way, how can we convert the video data format from RAW12 to NV12 for H.264 encoding? Can we use nvvidconv to do it?

Hi,
The nvvidconv plugin doesn't support this conversion. A possible solution is to capture frame data into a CUDA buffer and implement CUDA code to do the conversion. It's suggested to use jetson_multimedia_api for this use case.
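Before writing the CUDA conversion, you can sanity-check the raw capture path with v4l2-ctl; the device node, resolution, and pixel format below are assumptions and need to be adjusted for your sensor:

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=RG12 --stream-mmap --stream-count=1 --stream-to=frame.raw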

Is your camera source a Bayer sensor? If it is a Bayer sensor, the other solution is to use the ISP engine for debayering and get YUV420 frame data.
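For illustration, a pipeline along these lines exercises the ISP path via nvarguscamerasrc (the sensor-id, resolution, and framerate are assumptions; software x264enc is used since Orin Nano has no hardware encoder):

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)NV12, framerate=(fraction)30/1' ! nvvidconv ! 'video/x-raw, format=(string)I420' ! x264enc ! h264parse ! qtmux ! filesink location=test.mp4 -e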

Thanks DaneLLL.
The nvvidconv plugin doesn't support this conversion. A possible solution is to capture frame data into a CUDA buffer and implement CUDA code to do the conversion. It's suggested to use jetson_multimedia_api for this use case.

I have studied cuda-samples for a long time and I can't find a sample that does this. Can you tell me which example I can refer to?


Is your camera source a Bayer sensor? If it is a Bayer sensor, the other solution is to use the ISP engine for debayering and get YUV420 frame data.
~~~~~~~~~~~~~~~~~~~~~~~~~~
Yes, we use a Bayer sensor and just changed the V4L2 driver based on nv_imx219.c. How can I use the ISP engine? I can get raw Bayer data through the V4L2 API now.

Hi,
We have the sensor driver programming guide in the developer guide:
Camera Development — NVIDIA Jetson Linux Developer Guide documentation

Please take a look and give it a try.

Hi DaneLLL, I have tested successfully with the IMX219 ISP demo, but I found it uses the ISP config file nvcam_cache_1.bin for this sensor. We want to use our own sensor with no ISP config file (only RAW → YUV conversion). How can I do that? Thanks