Is there a way to process data from camera inputs on Jetson platforms through RTX noise reduction algorithms or other noise reduction algorithms?
May I know which camera sensor you're working with? Is it a Bayer sensor?
Are you using one of the Jetson Partner Supported Cameras from our Jetson Camera Partners on the Jetson platform?
I don’t have a specific camera model yet, but are you saying that it is possible to run image data through the RTX noise reduction algorithm?
May I have more details?
Are you referring to RTX Noise Reduction as one of the graphics-card-supported features?
No, not necessarily as a graphics-card feature, but rather as an AI operation on the raster image data collected from a camera. Camera sensors produce noise at each pixel, both as dark current, which increases with exposure time and temperature, and as read noise introduced when the analogue signal is converted to the digital count that is output to the computer. I wish to train the AI to distinguish between a) the patterns of noise and b) the signal that is generated when the camera sensor detects an object during an exposure period.
Does that make sense to you?
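To make the two noise sources above concrete, here is a minimal stdlib-only simulation of a single pixel. All rates and exposure times are made-up illustrative values, not specs for any real sensor:

```python
import random
import statistics

random.seed(0)

def simulate_pixel(signal_e, exposure_s, dark_rate, n=10_000):
    """Simulate one pixel's signal (in electrons) over many exposures.

    signal_e   -- photoelectrons from the actual scene
    exposure_s -- exposure time in seconds
    dark_rate  -- dark-current rate in e-/s (grows with temperature)
    """
    read_noise_e = 2.0  # RMS read noise in electrons; illustrative value
    counts = []
    for _ in range(n):
        dark_mean = dark_rate * exposure_s
        # Dark current carries its own shot noise (~sqrt of the mean).
        dark_e = random.gauss(dark_mean, dark_mean ** 0.5)
        # Read noise is added once per readout, independent of exposure.
        read_e = random.gauss(0.0, read_noise_e)
        counts.append(signal_e + dark_e + read_e)
    return statistics.mean(counts), statistics.stdev(counts)

# Same scene signal, short vs. long exposure at the same temperature:
short_mean, short_sd = simulate_pixel(100, 1.0, 0.5)
long_mean, long_sd = simulate_pixel(100, 60.0, 0.5)
print(short_mean, short_sd)
print(long_mean, long_sd)
```

The longer exposure raises both the mean (dark-current offset) and the spread (extra shot noise), which is exactly the pattern a learned denoiser would have to separate from real scene signal.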
FYI, there’s noise reduction and also bad pixel correction within the ISP.
According to the Camera Architecture Stack, if you’re using Bayer sensors the stream goes through [Camera Core] and the ISP is involved.
I am not using a Bayer sensor, but instead a monochromatic sensor. How does that affect the noise reduction process?
Does a user have control over the noise reduction and bad pixel correction in the “ISP”? Can you define ISP for me?
No, users do not have permission to access the ISP.
Please see the Xavier TRM and check [Figure 2.1 Xavier Processor Block Diagram].
There’s an ISP (Image Signal Processor) block, which is a hardware engine that is part of the camera processing pipeline.
The ISP only supports Bayer sensors; you may access a monochromatic sensor stream via v4l2src.
There’s an example, 12_camera_v4l2_cuda, that demonstrates how to capture images from a V4L2 YUV camera and share the image stream with CUDA® engines.
I am working with two cameras: 1) ZWO ASI294MM-Pro monochrome camera with USB3.1 connector and 2) Tucsen FL-20BW with the Sony IMX183 CMOS sensor and USB3.1 interface.
Are these supported by the Jetson Nano?
Since you’re using camera sensors with a USB interface, you can only access the sensor stream via v4l2src.
Please refer to post #11 for the sample that demonstrates sharing the image stream with CUDA engines.
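Once frames are in host memory via v4l2src, a classic first software step before any learned denoiser is dark-frame subtraction. A minimal sketch on tiny synthetic monochrome frames (no camera needed; the 50 e- dark level, 100 e- scene signal, and noise sigma are all illustrative stand-ins):

```python
import random

random.seed(1)
W, H = 8, 8  # tiny synthetic frame for illustration

def capture_dark():
    # Stand-in for a real exposure with the shutter closed:
    # fixed dark level (50) plus per-pixel noise.
    return [[50 + random.gauss(0, 2) for _ in range(W)] for _ in range(H)]

def capture_light():
    # Stand-in for a real exposure: scene signal (100) + same dark level.
    return [[100 + 50 + random.gauss(0, 2) for _ in range(W)] for _ in range(H)]

def average(frames):
    n = len(frames)
    return [[sum(f[y][x] for f in frames) / n for x in range(W)]
            for y in range(H)]

# Average many dark frames to estimate the fixed dark level per pixel
# (a "master dark"), then subtract it from a light frame.
master_dark = average([capture_dark() for _ in range(64)])
light = capture_light()
corrected = [[light[y][x] - master_dark[y][x] for x in range(W)]
             for y in range(H)]

mean_corrected = sum(map(sum, corrected)) / (W * H)
print(round(mean_corrected, 1))
```

After subtraction the frame mean sits near the true scene signal (100 here); on a Jetson the same per-pixel arithmetic could be offloaded to CUDA once the V4L2 buffers are shared, as in the 12_camera_v4l2_cuda sample.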