I would like to use the TX1’s ISP function to make a camera product. I will not use the built-in 3A (AF/AE/AWB) functions, but my own 3A algorithms instead. So I need to evaluate two points: first, whether the ISP functions meet my requirements; second, whether the 3A statistics match my own 3A algorithms. I would like to find a detailed datasheet about the ISP, but I only found very brief descriptions like these:
-Bayer domain hardware noise reduction
-Per-channel black-level compensation
-High-order lens-shading compensation
-3x3 color transform
-Bad pixel correction
-Programmable coefficients for de-mosaic with color artifact reduction
Color Artifact Reduction: a two-level (horizontal and vertical) low-pass filtering scheme used to reduce or remove any color artifacts that may result from Bayer signal processing and the effects of sampling an image.
-Enhanced down scaling quality
-Edge Enhancement
This is too brief, and my questions are:
1) Does “noise reduction” include both temporal noise reduction (TNR) and spatial noise reduction?
2) Does “bad pixel correction” include both static and dynamic bad pixel correction? (And how many pixels at most can static bad pixel correction handle?)
3) Does the TX1 support a digital zoom (D-zoom) function, and what is the maximum zoom ratio?
4) About the 3A statistics:
A) AE ---- Is the histogram computed over the whole picture, or only over selected zones?
B) AWB ---- Do the region averages exclude any pixels, for example a strongly saturated red pixel that is not gray?
C) AF ---- What kind of filter is used, FIR or IIR? How many filters? How many regions are provided (12x12, or more)?
5) How many frames of delay does the TX1 ISP pipeline add? For example: “MIPI sensor -> VI (1 frame) -> ISP (1 frame) -> VPSS (1 frame) -> VO (1 frame), 4 frames of delay for local HDMI display”.
Can you provide a more detailed datasheet introducing the ISP functions, and the software control interface?
Hi All
There is some data that is public, below.
There is also a doc located at tegra_multimedia_api/argus/docs/Argus.0.96.pdf, and some useful sample code under tegra_multimedia_api/argus/samples/ (bayerAverageMap, cudaHistogram, …).
Thanks for your reply.
I have read the document and video you provided, and found that they are almost all about the app level, describing how to open a camera device and get a video stream. But what I need is at the driver level (frameworks level, kernel level), where I can configure ISP parameters such as the RGB-to-RGB matrix, gamma table, demosaicing, and noise reduction. These functions are provided by the TX1’s hardware ISP pipeline, not by the GPU.
Page 11 of the “getting-started-jetpack-camera-api.pdf” document shows the ISP subsystem functions, which helps me understand the ISP pipeline. But I need more information about the ISP pipeline, namely the questions above:
1) Does “noise reduction” include both temporal noise reduction (TNR) and spatial noise reduction?
2) Does “bad pixel correction” include both static and dynamic bad pixel correction? (And how many pixels at most can static bad pixel correction handle?)
3) Does the TX1 support a digital zoom (D-zoom) function, and what is the maximum zoom ratio?
4) About the 3A statistics:
A) AE ---- Is the histogram computed over the whole picture, or only over selected zones?
B) AWB ---- Do the region averages exclude any pixels, for example a strongly saturated red pixel that is not gray?
C) AF ---- What kind of filter is used, FIR or IIR? How many filters? How many regions are provided (12x12, or more)?
5) How many frames of delay does the TX1 ISP pipeline add? For example: “MIPI sensor -> VI (1 frame) -> ISP (1 frame) -> VPSS (1 frame) -> VO (1 frame), 4 frames of delay for local HDMI display”.
Can you show me more details about the ISP pipeline and the ISP tuning tool, and the source interfaces in the frameworks/kernel layers to configure the ISP parameters?
There is no static bad pixel correction, only dynamic bad pixel correction.
Digital zoom is supported. It goes through the VIC, so we would need to look up those limits; we think the ratio can be quite large.
The histogram covers the full image.
The region averages exclude clipped pixels. The API documentation has more about the Bayer Average Map.
We expose a 64x64 sharpness map computed with an FIR filter.
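The actual filter taps and hardware pipeline behind that map are not public, but the general idea of a regional sharpness map can be sketched generically: accumulate the absolute response of a high-pass FIR kernel into a grid of regions. The 3x3 Laplacian-style kernel below is a hypothetical stand-in, not the TX1’s real filter:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical 3x3 high-pass (Laplacian-style) kernel; the real taps
// used by the hardware are not public.
static const int K[3][3] = {{-1, -1, -1}, {-1, 8, -1}, {-1, -1, -1}};

// Split a w x h row-major luma image into gx x gy regions and accumulate
// the absolute high-pass response per region (border pixels skipped).
std::vector<double> sharpness_map(const std::vector<double>& img,
                                  int w, int h, int gx, int gy) {
    std::vector<double> map(static_cast<std::size_t>(gx) * gy, 0.0);
    for (int y = 1; y < h - 1; ++y) {
        for (int x = 1; x < w - 1; ++x) {
            double r = 0.0;
            for (int j = -1; j <= 1; ++j)
                for (int i = -1; i <= 1; ++i)
                    r += K[j + 1][i + 1] * img[(y + j) * w + (x + i)];
            map[(y * gy / h) * gx + (x * gx / w)] += std::fabs(r);
        }
    }
    return map;
}
```

Flat regions contribute nothing (the kernel taps sum to zero), while regions containing edges accumulate large values, which is what an AF loop peaks on.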
Thanks for your kind reply, it really helps me. Could I ask for some more details?
1) “Histogram is full image” ----- In the MM API, the user can get the bin count via the histogram.size() interface. In my opinion, the bin count should be a constant, such as 256 (0~255) for 8-bit or 1024 (0~1023) for 10-bit data. Why provide this histogram.size() interface? Is it because it needs to return 256 for 8-bit MIPI input and 1024 for 10-bit MIPI input?
2) “We expose a 64x64 sharpness map using an FIR filter” ----- Generally, two FIR filters should be provided: one high-pass and one low-pass. With an optical zoom lens, the focus range may be very long; it might focus anywhere from 0.3 m to 20 m. When the image is very blurry, the high-pass filter’s values are mostly zero, and at that point we need the low-pass filter to decide the direction of lens motor movement. So I am worried: if the TX1 only provides a sharpness map from a single FIR filter, how can it support AF for an optical zoom lens?
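The concern can be illustrated with a minimal 1-D sketch (my own illustration, not TX1 behavior): a narrow high-pass FIR collapses on a heavily defocused edge, while the same taps applied at a wider spacing respond to lower spatial frequencies and still produce a usable score:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Focus score: sum of |FIR response| with taps {-1, 2, -1} at the given
// tap spacing. Spacing 1 is a narrow high-pass; a wider spacing responds
// to lower spatial frequencies, standing in for the "low-pass" filter
// discussed above. Purely illustrative.
double focus_score(const std::vector<double>& x, int spacing) {
    double s = 0.0;
    for (std::size_t i = spacing; i + spacing < x.size(); ++i)
        s += std::fabs(-x[i - spacing] + 2.0 * x[i] - x[i + spacing]);
    return s;
}

// Build a step edge of length n; ramp > 0 smears it into a linear ramp
// of that width, emulating defocus blur.
std::vector<double> edge(int n, int ramp) {
    std::vector<double> x(n, 0.0);
    for (int i = 0; i < n; ++i) {
        int d = i - n / 2 + ramp / 2;
        x[i] = d <= 0 ? 0.0 : (d >= ramp ? 1.0 : double(d) / ramp);
    }
    return x;
}
```

On the sharp edge the narrow filter scores high, but on the blurred edge its score nearly vanishes while the wide-spacing filter still sees the ramp, which is the signal an AF loop needs to pick a motor direction.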
The current version of libArgus has not been validated for autofocus systems. There are some preliminary API controls that we expect will support user-layer autofocus logic, but this has been a lower-priority area for our development and validation efforts. We are familiar with the limitations of our sharpness map when the lens is significantly out of focus. One way to obtain a lower-frequency sharpness score, for use alongside the high-frequency sharpness map, is to run a small convolution over the region of interest in the Bayer sharpness map. The amount of data is relatively small, so this can be done on the CPU. Another possibility is to request a downscaled output from the ISP and use CUDA to compute a custom sharpness metric.
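A minimal sketch of the CPU approach described above, assuming the sharpness map arrives as a row-major 64x64 array of floats (an assumption for illustration; the actual libArgus data type may differ):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Box-average (a small convolution kernel evaluated over a region of
// interest) of the 64x64 sharpness map, yielding a single
// lower-frequency focus score for that ROI.
double roi_sharpness(const std::vector<float>& map64,  // 64*64 cells
                     int x0, int y0, int w, int h) {   // ROI in map cells
    double sum = 0.0;
    for (int y = y0; y < y0 + h; ++y)
        for (int x = x0; x < x0 + w; ++x)
            sum += map64[static_cast<std::size_t>(y) * 64 + x];
    return sum / (static_cast<double>(w) * h);
}
```

At 4096 cells per frame this is cheap enough to run on the CPU inside an AF control loop, as the reply suggests.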
libArgus is designed so that it could be used on a variety of hardware. Different ISPs might choose to use a different number of bins for their histograms to trade off hardware cost with precision. Allowing the application to query the size means that we won’t force unnecessary remapping operations that could consume CPU cycles and would prevent the application from seeing the raw data generated by the ISP.
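As a sketch of what this buys the application: statistics code can be written against whatever bin count the implementation reports, instead of hard-coding 256 or 1024. The vector-of-counts layout here is an assumption for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Mean pixel level normalized to [0,1], independent of the bin count
// (256 bins for 8-bit data, 1024 for 10-bit, or whatever the ISP
// produces) -- the size is queried at run time, never assumed.
double normalized_mean(const std::vector<uint32_t>& histogram) {
    const std::size_t bins = histogram.size();
    double total = 0.0, weighted = 0.0;
    for (std::size_t b = 0; b < bins; ++b) {
        total += histogram[b];
        weighted += histogram[b] * static_cast<double>(b);
    }
    if (total == 0.0 || bins < 2) return 0.0;
    return weighted / total / (bins - 1);  // scale bin index into [0,1]
}
```

The same AE logic then works unchanged across 8-bit and 10-bit sensor modes, which is the portability argument made above.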
And for image quality tuning, we suggest working with a scaling partner.