Hi,
We are working on an application in which we receive one RGGB frame and one IR frame alternately. Before sending these frames to the Jetson ISP, we want to do some processing, on either the GPU or the CPU, to create a single Bayer frame from the two frames, and then send the resulting raw Bayer frame to the Jetson ISP.
Is it possible to achieve this with the Jetson camera and ISP framework?
Is it also possible to get RGGB and IR frames of different sizes? For example, say the RGGB frame is 8 MP and the IR frame is 2 MP: can the camera interface on Jetson be configured to receive frames of these different sizes alternately?
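For context, the fusion step described above could be sketched on the CPU along these lines. This is only an illustration, not a Jetson API: the quarter-resolution IR frame, 16-bit samples, and the simple averaging rule are all assumptions.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical CPU fusion: fold a quarter-resolution IR frame into an
// RGGB Bayer frame by averaging each IR sample into the 2x2 Bayer cell
// it covers. The sizes and the fusion rule are assumptions for illustration.
std::vector<uint16_t> fuseBayerIr(const std::vector<uint16_t>& rggb,
                                  int w, int h,
                                  const std::vector<uint16_t>& ir,
                                  int irW, int irH)
{
    assert(w == 2 * irW && h == 2 * irH); // IR assumed to be quarter resolution
    std::vector<uint16_t> out = rggb;     // start from the Bayer data
    for (int y = 0; y < irH; ++y) {
        for (int x = 0; x < irW; ++x) {
            uint16_t irVal = ir[y * irW + x];
            // average the IR sample into the four pixels of its Bayer cell
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    int idx = (2 * y + dy) * w + (2 * x + dx);
                    out[idx] = static_cast<uint16_t>((rggb[idx] + irVal) / 2);
                }
        }
    }
    return out; // one raw Bayer-sized frame in memory
}
```

The same per-pixel loop maps naturally onto a CUDA kernel (one thread per Bayer cell) if GPU processing is preferred.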
Thanks. I guess this is in reference to feeding pre-processed data into the Jetson ISP, and that is not supported; is that what you mean?
Also, is it possible to get frames of different sizes alternately, i.e., does Jetson's camera interface support it? And can you point me to a reference example showing that, if we save these frames (RGGB and IR) into memory, we can write processing code for the GPU or CPU to produce a final RAW frame, at least in memory?
The current driver design is streaming-only: you stream one output size, and to switch to another output size (sensor mode) you need to stop streaming, select the other mode, and start streaming again.
For the reprocessing, you can check the MMAPI sample 12_camera_v4l2_cuda.
The 12_camera_v4l2_cuda sample requires the input image format to be YUV or already processed. Is there an application where I can get RAW image data from the camera, use the GPU/CPU for processing, and have the processed frame saved in memory so that I can use it?
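For the "process a RAW frame in memory" part, one common workflow is to dump the Bayer frame to a file first (e.g. with v4l2-ctl) and then load and process it yourself. A minimal sketch, assuming 16-bit samples and a hypothetical file path and gain value; none of this is MMAPI code:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical helper: load a raw Bayer dump (e.g. one written with
// v4l2-ctl --stream-to=frame.raw) into memory as 16-bit samples.
std::vector<uint16_t> loadRaw(const char* path, size_t numSamples)
{
    std::vector<uint16_t> buf(numSamples);
    FILE* f = std::fopen(path, "rb");
    if (!f) return {};
    size_t got = std::fread(buf.data(), sizeof(uint16_t), numSamples, f);
    std::fclose(f);
    buf.resize(got); // keep only what was actually read
    return buf;
}

// Example CPU processing step: apply a digital gain in place,
// clamped to a 10-bit range (the bit depth is an assumption).
void applyGain(std::vector<uint16_t>& raw, float gain)
{
    for (auto& v : raw) {
        float g = v * gain;
        v = static_cast<uint16_t>(g > 1023.f ? 1023.f : g);
    }
}
```

The resulting buffer stays in memory, so further GPU/CPU stages (or a CUDA kernel in place of `applyGain`) can consume it directly.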