I have written a program whose architecture is OpenCV + GStreamer + nvarguscamerasrc to read image data. After getting a frame, I wrote a de-Bayer function to convert the image data and then passed it to a CUDA program.
I don't want the raw data to be compressed, so I want to get the Bayer data as it first arrives on the AGX Xavier.
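For reference, a simplified sketch of this kind of OpenCV + GStreamer capture path (the pipeline string and resolution are illustrative assumptions, not my exact code):

```cpp
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    // Illustrative pipeline: nvarguscamerasrc outputs ISP-processed NV12,
    // which nvvidconv/videoconvert turn into BGR for OpenCV. The frames
    // have already passed through the ISP on this path.
    std::string pipeline =
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM),width=1920,height=1080,format=NV12 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink";
    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // process frame here
    }
    return 0;
}
```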
Below is my development environment:
AGX Xavier
JetPack 4.5.1 (L4T 32.5.1)
Camera: Bayer camera (without I2C)
I have a few possible directions:
1. gst + v4l2src (but I am afraid this architecture does not support a Bayer camera?)
2. MMAPI + libargus (maybe it cannot bypass the ISP?)
3. A plain V4L2 program (as far as I know, V4L2 uses ioctl to set controls and get frames; I am not sure whether it works without I2C?)
I have no idea which of these is most likely to get the original Bayer image data. Could you give me some advice?
Thanks.
Hi,
A possible solution is to have a sensor driver ready for V4L2, so that you can capture frame data through V4L2 ioctl calls. You can then run this sample app:
/usr/src/jetson_multimedia_api/samples/v4l2cuda
The frames are captured and put into a CUDA buffer, and you can implement de-Bayering through CUDA.
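As a rough illustration of the ioctl capture flow (this is a sketch, not the v4l2cuda source; the device node, pixel format, and resolution are assumptions that depend on your sensor driver):

```cpp
// Minimal V4L2 raw-Bayer capture sketch (single mmap buffer, one frame).
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main() {
    int fd = open("/dev/video0", O_RDWR);            // assumed device node
    if (fd < 0) { perror("open"); return 1; }

    v4l2_format fmt = {};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 1920;                        // assumed resolution
    fmt.fmt.pix.height = 1080;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SRGGB10;  // assumed 10-bit RGGB
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

    v4l2_requestbuffers req = {};
    req.count = 1;                                   // real apps queue several
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("VIDIOC_REQBUFS"); return 1; }

    v4l2_buffer buf = {};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    buf.index = 0;
    ioctl(fd, VIDIOC_QUERYBUF, &buf);
    void *mem = mmap(nullptr, buf.length, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, buf.m.offset);

    // Queue the buffer, start streaming, and wait for one raw Bayer frame.
    ioctl(fd, VIDIOC_QBUF, &buf);
    int type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);
    ioctl(fd, VIDIOC_DQBUF, &buf);   // 'mem' now holds buf.bytesused raw bytes

    // The raw data in 'mem' can be copied into a CUDA buffer at this point.
    ioctl(fd, VIDIOC_STREAMOFF, &type);
    munmap(mem, buf.length);
    close(fd);
    return 0;
}
```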
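And a toy CUDA kernel to show one way the de-Bayering step can look (nearest-neighbor demosaic for an assumed RGGB pattern with unpacked 10-bit samples; the actual sample and your sensor layout may differ):

```cpp
#include <cstdint>

// Nearest-neighbor de-Bayer for an RGGB mosaic; assumes even width/height
// and 16-bit-aligned raw samples holding 10-bit values.
__global__ void debayer_rggb_nn(const uint16_t *raw, uchar3 *rgb,
                                int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Snap to the top-left corner of the 2x2 RGGB cell:  R G
    //                                                    G B
    int cx = x & ~1;
    int cy = y & ~1;
    uint16_t r  = raw[cy * width + cx];
    uint16_t g1 = raw[cy * width + cx + 1];
    uint16_t g2 = raw[(cy + 1) * width + cx];
    uint16_t b  = raw[(cy + 1) * width + cx + 1];

    // Average the two greens and shift 10-bit values down to 8 bits.
    rgb[y * width + x] = make_uchar3((unsigned char)(r >> 2),
                                     (unsigned char)(((g1 + g2) >> 1) >> 2),
                                     (unsigned char)(b >> 2));
}
```

Launch it over the frame with a 2D grid, e.g. dim3 block(16, 16) and a grid of ((width + 15) / 16, (height + 15) / 16). Real de-Bayering usually uses bilinear or edge-aware interpolation; this is only to show the data flow.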
You may check with your camera vendor for the sensor driver. Generally, Bayer sensors use I2C; certain old sensors use SPI. It is a bit strange that your Bayer sensor works without I2C.