Hello,
I have started to develop a C++ camera application, and I began by reverse engineering the Argus API samples. Based on the exposure examples, I think it uses the onboard ISP. I want to interact with the camera via I2C to set the camera registers dynamically. Is that possible, or can I only communicate through the ISP if I write a custom application?
hello bence.lukacs,
first of all, there’s an OEM firewall; you may see interrupts by doing so.
please see also Topic 234321 for updating the CSI-specific firewall settings.
Thank you for your fast response.
I need to understand the content of that topic before re-flashing the device. I am investigating the Argus API samples, but to be honest it is not completely clear to me how I can apply any modification or image-processing step to the actual frame.
Based on the oneShot sample I know how to create the IFrame object, but I do not understand how to interface with the frame beyond the examples in the samples (denoise, userAutoExposure). I want to implement my own frame statistics, but before the Jetson I had only worked with cameras via OpenCV. Because OpenCV cannot access the gain and exposure settings of the camera (even with a GStreamer backend), I started to work with the Argus API. But now I lack knowledge of even the simple mechanics, like how the actual frame is shown on the display.
I saw multiple examples of frame acquisition in the samples, but the image display is still not clear to me. I found a video on YouTube, ‘Get Started with the JetPack Camera API’, and a corresponding slide deck. Now I understand how to set up and connect to the camera. I would say I understand, more or less, how to send a new capture request, and that the previewConsumer gets the OutputStream. But how can I interact with the frame in the OutputStream?
hello bence.lukacs,
please refer to the MMAPI documentation for some examples.
https://docs.nvidia.com/jetson/archives/r36.3/ApiReference/group__l4t__mm__test__group.html
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.