ATCOM: Real-Time Enhancement of Long-Range Imagery

Originally published at: https://developer.nvidia.com/blog/atcom-real-time-enhancement-long-range-imagery/

Imaging over long distances is important for many defense and commercial applications. High-end ground-to-ground, air-to-ground, and ground-to-air systems now routinely image objects several kilometers to several dozen kilometers away; however, this increased range comes at a price. In many scenarios, the limiting factor becomes not the quality of your camera but the atmosphere through which…

Impressive work. How does this approach compare to naive multi-frame registration? That is, when stills from successive frames are aligned (warped) and blended, without the atmospheric correction kernel.
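(For concreteness, here is a minimal sketch of the baseline being described: align each frame to a reference and average. The choice of ECC alignment and an affine motion model is mine for illustration; the original comment doesn't specify a registration method.)

```python
import cv2
import numpy as np

def register_and_blend(frames):
    """Naive multi-frame baseline: warp each frame onto the first, then average."""
    ref = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY).astype(np.float32)
    acc = frames[0].astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-4)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)  # start from the identity affine
        # Estimate the affine warp aligning this frame to the reference.
        _, warp = cv2.findTransformECC(ref, gray, warp, cv2.MOTION_AFFINE,
                                       criteria, None, 5)
        h, w = ref.shape
        acc += cv2.warpAffine(frame.astype(np.float32), warp, (w, h),
                              flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    return (acc / len(frames)).astype(np.uint8)
```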

Good question. We have actually spent a good amount of time looking at this approach as well. In some cases, particularly very light turbulence, it's hard to notice much of a difference. Where the bispectrum-based approach shines is when the turbulence becomes significant. That said, when processing in color, and depending on the colorspace in use, we may process one or two of the channels as you suggest to save computation. We also leave the option in our software to process entirely that way for extra speed when working with low-turbulence data.
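(To illustrate the channel-splitting idea: in a luma-chroma colorspace, the costly enhancer can run on luminance alone while the chroma channels get the cheap blend. `enhance_fn` and `blend_fn` here are hypothetical placeholders, since the post doesn't publish its pipeline.)

```python
import cv2

def enhance_luma_only(frames, enhance_fn, blend_fn):
    """Run the expensive enhancement on luminance only; blend chroma cheaply.

    enhance_fn: placeholder for a bispectrum-style multi-frame enhancer.
    blend_fn:   placeholder for naive registration-and-blend.
    Both are assumed to return uint8 images of the input size.
    """
    ycrcb = [cv2.cvtColor(f, cv2.COLOR_BGR2YCrCb) for f in frames]
    y  = enhance_fn([img[:, :, 0] for img in ycrcb])  # costly path: luma
    cr = blend_fn([img[:, :, 1] for img in ycrcb])    # cheap path: chroma
    cb = blend_fn([img[:, :, 2] for img in ycrcb])
    return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
```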

And not to go too far on a tangent, but you raise the more subtle point of what actually counts as better. How do you objectively compare two enhanced images and say which one is better? People have written entire dissertations on this topic. We have even co-authored a paper with one of the leading experts in this space, and I still don't think there is a categorical conclusion.
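(As one concrete example of why this is hard: a popular no-reference proxy is the variance of the Laplacian, but it rewards amplified noise as readily as true detail, so it can rank an over-sharpened frame above a faithful one. This metric is purely an illustration, not one the post endorses.)

```python
import cv2

def sharpness_proxy(image_gray):
    """Variance of the Laplacian: a crude no-reference sharpness score.

    Higher usually means sharper, but noise amplification also raises
    the score, which is exactly why single-number comparisons of
    enhanced imagery remain contested.
    """
    return cv2.Laplacian(image_gray, cv2.CV_64F).var()
```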

Since our interest is in pushing the limits of the technology, we have pursued the bispectrum approach. In our experience, there are scenarios where a more naive approach is sufficient, but they are a small subset of the cases a more robust method can address (at least for the kinds of data sets we're used to seeing).

I wish I could give you a more quantitative answer, but how to arrive at one is still a somewhat philosophical question at this point. (And given the length of this answer, you can see why I had to limit my post and leave a lot of material on the cutting room floor.)


Even semi-naive registration is far from what multiframe blind deconvolution can achieve. You can find a comparison (for use with astronomy) in the following dissertation: http://hdl.handle.net/10900...
Incidentally, I am currently working on porting it to CUDA, with some changes to remove data-dependent and convergence-critical parameters. The code is no longer online, but it still scales as O(n log n) thanks to its tree-like reduction.
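(For readers unfamiliar with the pattern mentioned above: a tree-like reduction combines operands pairwise in a balanced tree, finishing in ceil(log2(n)) passes. This is a minimal sketch in Python rather than CUDA, purely to show the reduction shape; the commenter's port and its blind-deconvolution math are not available here.)

```python
import numpy as np

def tree_reduce(arrays, combine=np.add):
    """Combine n arrays pairwise in a balanced tree: ceil(log2(n)) passes.

    Each pass halves the working set, which is the same shape a CUDA
    tree reduction takes across threads or blocks.
    """
    items = list(arrays)
    while len(items) > 1:
        nxt = [combine(items[i], items[i + 1])
               for i in range(0, len(items) - 1, 2)]
        if len(items) % 2:  # carry the odd element forward to the next pass
            nxt.append(items[-1])
        items = nxt
    return items[0]
```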