I have written a program whose architecture is OpenCV + GStreamer + nvarguscamerasrc to read image data. After getting a frame, I wrote a de-Bayer function to convert the image data and then pass it to a CUDA program.
I don't want the raw data to be compressed, so I want to get the Bayer data as it first arrives on the AGX Xavier.
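For reference, here is a minimal sketch of the capture path described above: nvarguscamerasrc feeding OpenCV through a GStreamer pipeline string. The resolution, framerate, and sensor-id values are placeholders, and note that this path goes through the ISP (NV12 output), which is exactly why it does not deliver raw Bayer data:

```python
def argus_pipeline(width=1920, height=1080, fps=30, sensor_id=0):
    """Build a GStreamer pipeline string for OpenCV's GStreamer backend.

    nvarguscamerasrc outputs ISP-processed NV12 in NVMM memory;
    nvvidconv + videoconvert turn it into BGR for cv2.
    """
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM),width={width},height={height},"
        f"framerate={fps}/1,format=NV12 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink"
    )

# Usage (on the Jetson, with OpenCV built against GStreamer):
#   cap = cv2.VideoCapture(argus_pipeline(), cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()
```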
Below is my development environment:
JetPack 4.5.1 (L4T 32.5.1)
Camera: Bayer camera (without i2c)
I have considered some directions:
gst + v4l2src (but I am afraid this architecture does not support a Bayer camera?)
MMAPI + libargus (maybe it cannot bypass the ISP?)
A V4L2 program (as I understand it, V4L2 uses ioctl to set parameters and get frames; I am not sure whether it works without an i2c connection?)
I have no idea which approach is more likely to get the original Bayer image data.
Could you give me some advice?
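To make the third direction concrete, here is a small sketch of how a V4L2 program talks to the driver via ioctl, using Python's `fcntl` rather than C. The key point is that ioctl calls go to the kernel sensor driver, and it is the driver that handles any i2c traffic internally; user space never touches i2c directly. The `/dev/video0` path is an assumption:

```python
import fcntl
import os

# _IOR('V', 0, struct v4l2_capability): read-only ioctl, 104-byte payload
VIDIOC_QUERYCAP = 0x80685600

def query_cap(dev="/dev/video0"):
    """Ask the V4L2 driver what it can do (VIDIOC_QUERYCAP)."""
    buf = bytearray(104)  # sizeof(struct v4l2_capability)
    fd = os.open(dev, os.O_RDWR)
    try:
        fcntl.ioctl(fd, VIDIOC_QUERYCAP, buf)
    finally:
        os.close(fd)
    driver = bytes(buf[0:16]).split(b"\0")[0].decode()
    card = bytes(buf[16:48]).split(b"\0")[0].decode()
    return driver, card
```

If no sensor driver is loaded, there is no `/dev/videoN` node at all, which is why a driver is a prerequisite for this path.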
A possible solution is to have a sensor driver ready for V4L2, so that you can capture frame data through V4L2 ioctl calls. Then you can run this sample app:
The frames are captured into a CUDA buffer, and you can implement de-Bayering through CUDA.
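As an illustration of the de-Bayer step, here is a CPU reference in NumPy for an RGGB mosaic using simple nearest-neighbour reconstruction; the real version would run the same per-2x2-cell logic as a CUDA kernel over the captured buffer. The RGGB ordering is an assumption and depends on your sensor:

```python
import numpy as np

def debayer_rggb_nearest(raw):
    """Nearest-neighbour demosaic of an RGGB Bayer mosaic (CPU reference).

    Each 2x2 cell [R G; G B] becomes one RGB pixel, so the output
    is half resolution in each dimension.
    """
    r = raw[0::2, 0::2]
    # Average the two green samples in each cell (widen to avoid overflow).
    g = (raw[0::2, 1::2].astype(np.uint16) + raw[1::2, 0::2]) // 2
    b = raw[1::2, 1::2]
    return np.dstack([r, g.astype(raw.dtype), b])
```

Higher-quality demosaicing (e.g. bilinear or edge-aware) interpolates across neighbouring cells, but the memory-access pattern is the same.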
You may check with your camera vendor for the sensor driver. Generally, Bayer sensors use i2c; certain older sensors use SPI. It is a bit strange that your Bayer sensor works without i2c.
have sensor driver ready for v4l2.
→ Do you mean I need to modify the V4L2 kernel files, or do I need to write C code to have the sensor ready for V4L2?
If I can use i2c to initialize my camera via an external device, it can then start sending Bayer data. Is that what you mean by having the sensor driver ready for V4L2?
For porting a sensor driver, please refer to the sensor driver programming guide.