I’m working through the image sensor capture stack and am hoping someone can clarify what level of ISP support the new 24.2 release provides. I’m trying to figure out what hooks are available to configure the ISP for a raw Bayer sensor other than the OV5693. The docs state:
Which folders are the pre-defined folders, and where are the initial ISP configuration files for the reference sensors? If these locations are mentioned in the driver package docs, I apologize … I’ve either missed them, or it’s unclear which section refers to them.
Thanks in advance!
Thanks for your inquiry. Raw sensor support for Jetson L4T involves not only V4L2 sensor driver development but also passing conformance tests, image tuning, sensor calibration, etc. It is a long and complicated process that we cannot feasibly support in this forum at the moment. For that, NVIDIA enables our camera partner Leopard Imaging to provide support for our platform. Here is more info for your reference,
Is it true that with openKcam we could configure the ISP? Is there any way to use the ISP with a different sensor in R24.2?
Our libargus camera API is originally based on openKcam. For details of libargus, you can refer to the Argus directory of the MM API package after JetPack installation. Look for Argus.0.95.pdf under the tegra_multimedia_api/argus/docs directory.
As I mentioned previously in the thread, besides writing a V4L2 sensor driver, there are a few subsequent non-trivial tasks to be done before a raw sensor can fully utilize the ISP functions, and we have to rely on camera partners for that.
The R24.2 release notes, which you can download from the JEP portal, cover how to use another sensor, the Sony IMX185, on the Jetson platform. Which raw sensor do you have in mind to develop?
In general, we have been developing camera drivers for different customers for several years. Several customers have asked us to create drivers for Tegra X1 sensors such as the IMX219, the Toshiba 41 MP high-resolution camera (G2), the OV5647, the OV5640, etc., and they also need a way to use the ISP instead of CUDA. We always expose the driver functionality through GStreamer, so I was looking for options to use the ISP as well. I posted here about another option:
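For context, a typical GStreamer pipeline that exercises the ISP path on R24.2 looks roughly like the sketch below. The resolution, framerate, and sink choice are assumptions for illustration and will vary with the sensor mode you expose from your driver:

```shell
# Sketch: capture through the Tegra ISP via nvcamerasrc and render to the
# overlay sink. Caps (width/height/framerate) are assumptions -- adjust
# them to match a mode your sensor driver actually advertises.
gst-launch-1.0 nvcamerasrc fpsRange="30 30" ! \
  'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! \
  nvvidconv ! nvoverlaysink
```

Note that nvcamerasrc only produces ISP-processed output for sensors the camera stack has been tuned for, which is the limitation discussed in this thread.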
Could you check if my understanding is correct?
For a sensor like the OV5640 that can output YUV, you can develop your own V4L2 sensor driver and skip the TX1 ISP path entirely. For other raw sensors that require Tegra ISP processing, apart from the default OV5693, the steps are those I mentioned earlier and require assistance from our camera partners.
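For the YUV case, a minimal sketch of capturing directly over V4L2 and bypassing the ISP might look like this. The device node, resolution, and pixel format are assumptions; check what your driver actually exposes first:

```shell
# List the formats the sensor driver advertises (device node is an assumption).
v4l2-ctl -d /dev/video0 --list-formats-ext

# Sketch: grab one UYVY frame straight from the sensor over V4L2, with no
# Tegra ISP involvement. Format and resolution are assumptions.
v4l2-ctl -d /dev/video0 \
  --set-fmt-video=width=1920,height=1080,pixelformat=UYVY \
  --stream-mmap --stream-count=1 --stream-to=frame.uyvy
```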
After you get the frame data, you can use the GPU/CUDA or the hardware scaler for subsequent image data processing. Thus the ISP and CUDA are not an ‘either/or’ choice; they each provide different image processing functionality at different stages of the data path.
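As a rough illustration of processing at a later stage of the data path, the pipeline below captures ISP-processed frames and then rescales them with nvvidconv before writing them out. The buffer count, resolutions, and output file are assumptions for illustration only:

```shell
# Sketch: ISP produces NV12 frames via nvcamerasrc; nvvidconv then does
# post-capture scaling/conversion as a separate processing stage.
# All numeric parameters here are illustrative assumptions.
gst-launch-1.0 nvcamerasrc num-buffers=30 ! \
  'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12' ! \
  nvvidconv ! 'video/x-raw, width=640, height=480, format=I420' ! \
  filesink location=frames.yuv
```

A CUDA kernel or OpenCV stage could replace the nvvidconv step when custom processing is needed; the point is that this work happens after the ISP (or after raw capture), not instead of it.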
The ‘Camera Architecture Stack’ section in the Developer’s Guide provides an overview of the various data paths, including those that use or bypass the Tegra ISP.