@JerryChang Yes, it is greyscale; we have modified the device tree (DT) to set it as RGGB.
If your sensor does not support RGGB, modifying the device tree alone will not work.
Does the sensor support YUV422 such as UYVY or YUYV? The bottleneck looks to be in copying data from the CPU buffer to the NVMM buffer; that operation cannot achieve 1920x1204 at 60fps. With YUV422, the data can be captured into the NVMM buffer directly, avoiding the memcpy.
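As a rough sanity check on that bottleneck (the frame size and rate come from the thread; the bytes-per-pixel figures are assumptions), the per-frame copy traffic can be estimated as:

```python
# Estimated memory traffic for copying raw frames from a CPU buffer into
# NVMM at 1920x1204, 60 fps. Assumes 1 byte/pixel for RAW8/GRAY8 and
# 2 bytes/pixel when higher bit-depth RAW is stored in 16-bit containers.
WIDTH, HEIGHT, FPS = 1920, 1204, 60

def copy_rate_mb_s(bytes_per_pixel):
    """Sustained copy rate in MB/s for the given pixel size."""
    return WIDTH * HEIGHT * bytes_per_pixel * FPS / 1e6

print(copy_rate_mb_s(1))  # ~138.7 MB/s for 8-bit data
print(copy_rate_mb_s(2))  # ~277.4 MB/s for 16-bit containers
```

This traffic must be sustained every second on top of the rest of the pipeline, which is why capturing directly into NVMM removes the bottleneck.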
Here is a method for capturing frame data into NvBuffer directly:
Gstnvv4l2camerasrc with GRAY8 support
Please check the topic. If your source is able to adjust pitch, width, and height to fit the data alignment of NvBuffer, the additional memory copy can be eliminated.
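The alignment check can be sketched as follows. Note the 256-byte pitch alignment is an assumption here (a common hardware alignment on Jetson); check the NvBuffer parameters your platform actually reports:

```python
# Sketch: does a GRAY8 line of `width_bytes` already match the NvBuffer
# pitch, so frame data can land in the buffer without per-line repacking?
ALIGN = 256  # assumed hardware pitch alignment, not confirmed in the thread

def aligned_pitch(width_bytes, align=ALIGN):
    """Round a line width up to the next multiple of `align`."""
    return (width_bytes + align - 1) // align * align

# 1792 = 7 * 256, so pitch == width and the extra memcpy is avoidable.
print(aligned_pitch(1792))  # 1792
# A width such as 1900 would be padded, forcing a line-by-line copy.
print(aligned_pitch(1900))  # 2048
```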
Thank you for the support
After setting pitch = width (1792) and modifying nvv4l2camerasrc to support GRAY8, fpsdisplaysink runs at 60fps. I am able to run:
gst-launch-1.0 nvv4l2camerasrc bufapi-version=true ! "video/x-raw(memory:NVMM), format=(string)GRAY8, width=(int)1792, height=(int)1204, framerate=(fraction)60/1" ! fpsdisplaysink video-sink=fakesink text-overlay=false -e -v
GstPipeline:pipeline0/GstFPSDisplaySink:fpsdisplaysink0: last-message = rendered: 1861, dropped: 0, current: 60.32, average: 60.31
With the encoder I am able to achieve only 45fps.
We want to support RAW12 (with a monochrome sensor as well). Which pipeline supports RAW12?
RAW12 is not supported on Jetson Nano.
Thanks for the confirmation
Apart from RAW8, what other RAW modes are supported on Jetson Nano? The sensor we use supports RAW8, RAW10, RAW12, RAW14, and RAW16.
These formats are not supported through the ISP engine on Jetson Nano. A possible solution for capturing frame data in these formats is to capture into a CUDA buffer and implement CUDA code for debayering. For capturing frame data into a CUDA buffer, please take a look at
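The real debayering would run as a CUDA kernel; purely to illustrate the per-pixel logic such a kernel would implement, here is a CPU-only, half-resolution RGGB sketch (the function name and the half-resolution approach are illustrative, not from the thread):

```python
def debayer_rggb_halfres(raw, width, height):
    """Half-resolution debayer of an RGGB frame.

    raw: flat row-major list of width*height pixel values.
    Each 2x2 cell (R G / G B) collapses to one RGB pixel: the R and B
    samples are used directly and the two G samples are averaged.
    Returns a list of (r, g, b) tuples, (width//2)*(height//2) long.
    """
    rgb = []
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            r = raw[y * width + x]
            g1 = raw[y * width + x + 1]
            g2 = raw[(y + 1) * width + x]
            b = raw[(y + 1) * width + x + 1]
            rgb.append((r, (g1 + g2) // 2, b))
    return rgb

# One 2x2 RGGB cell -> one RGB pixel
print(debayer_rggb_halfres([10, 20, 30, 40], 2, 2))  # [(10, 25, 40)]
```

A full-resolution CUDA kernel would interpolate neighbours instead of collapsing each cell, but the RGGB site layout it has to respect is the same.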
I see only
NvBufferColorFormat_GRAY8 is present in
No other grey formats are supported as part of the enum.