Electronically Assisted Astronomy with a Jetson Nano

Combining noise removal and stabilization is a fascinating topic. Are you familiar with tensor fields or constructive solid geometry? I’m reminded of programs that mechanical engineers use to predict how a building will behave under stress, e.g., during an earthquake.

Correct me if I am wrong, but is a single frame modified based on a previous frame? Or does the stabilization have the ability to look at multiple frames in a row?

I am not familiar at all with tensor fields or solid geometry! Too difficult for my brain.

Stabilization only looks at the present frame. It does not look at multiple frames.

Stabilization and turbulence management are still at a very early stage in JetsonSky.

As I said, stabilization uses an OpenCV function (cv2.matchTemplate). It is very basic but works quite well.
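Not the actual JetsonSky code, but a minimal sketch of template-matching stabilization with cv2.matchTemplate, assuming a small grayscale patch `template` cropped once from a reference frame at top-left position `ref_xy`:

```python
import cv2
import numpy as np

def stabilize_frame(frame, template, ref_xy):
    # Find where the reference patch sits in the current frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    result = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    # Translate the frame so the patch returns to its reference position.
    dx = ref_xy[0] - max_loc[0]
    dy = ref_xy[1] - max_loc[1]
    m = np.float32([[1, 0, dx], [0, 1, dy]])
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, m, (w, h))
```

The patch would typically be cropped around a bright star or a contrasty lunar feature so that the match stays reliable from frame to frame.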

Turbulence management is in fact close to noise removal. The theory is that a turbulence-free video of a deep-sky or planetary target will be an almost perfectly static video.

When I see significant changes in a pixel value, it means turbulence is changing the signal. Therefore, I must damp big changes in the pixel value (while still allowing some change).

This method can give good results when the turbulence magnitude is small. The pixel value will change over time, but the changes will always hover around the right value. If I slow down the changes, I stay near a stable value and get an almost turbulence-free pixel value that is quite close to the real signal.

Noise management is quite similar, but in fact the opposite. Noise means quite small changes in the pixel value, while turbulence can bring very large changes.

With noise management, I allow big changes and damp small ones.

With turbulence management, I allow small changes and damp big ones.
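Both rules can be written as the same per-pixel blend, just with the damping condition inverted. A minimal sketch (the threshold and the 0.25 damping factor are made-up illustrative values):

```python
import numpy as np

def limit_variation(prev, cur, threshold=20, mode="turbulence"):
    """Damp per-pixel changes between two consecutive frames.

    mode="turbulence": big jumps are damped, small drifts pass through.
    mode="noise":      small jitter is damped, big jumps pass through.
    """
    prev = prev.astype(np.float32)
    cur = cur.astype(np.float32)
    diff = cur - prev
    big = np.abs(diff) > threshold
    damp = big if mode == "turbulence" else ~big
    # Damped pixels move only a fraction of the way toward the new value.
    out = np.where(damp, prev + 0.25 * diff, cur)
    return np.clip(out, 0, 255).astype(np.uint8)
```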

The big advantage of JetsonSky is that I can apply all those filters in a single pass, but I must choose the right order for the filters: stabilization first, then noise removal and/or turbulence management (both together also work), then colour management, then signal enhancement, then sharpness management. A small sketch of such a chain is shown below.
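As an illustration only (every step below is an identity placeholder, not a real implementation, and the names are made up), the single-pass chain just applies the filters in that fixed order:

```python
# Placeholder filters (identity); only the chaining and its order matter here.
def stabilize(f): return f          # 1. geometry first
def remove_noise(f): return f       # 2a. noise removal and/or...
def manage_turbulence(f): return f  # 2b. ...turbulence management
def adjust_colour(f): return f      # 3. colour management
def enhance_signal(f): return f     # 4. signal enhancement
def sharpen(f): return f            # 5. sharpness last, on cleaned data

def process(frame):
    # One pass over the frame, filters applied in a fixed order.
    for step in (stabilize, remove_noise, manage_turbulence,
                 adjust_colour, enhance_signal, sharpen):
        frame = step(frame)
    return frame
```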

For sure, the RAW capture must be as good as possible if we expect to get good results from the filters.

It is really interesting to make theory (build filters in the software), test the theory (deep-sky captures with treatments), validate what works, and change what does not.

I need to make more tests but weather here is not the best for astronomy.

Alain

To know something changed takes two measurements…so you are in fact stacking two frames. The pixel value in one frame, versus that same pixel in the next frame. You’re slowing down the change from one frame to the next.

This is just for fun, but what you’re doing is looking at the first derivative of a given pixel from one frame to the next…the rate of change. Knowing a gradient or slope takes two points. If you were to use a third point, then you could also consider acceleration…how fast the rate of change itself changes. You’re sort of doing that now when you treat lots of change differently from tiny change. The second derivative would require three frames of one pixel, and would do the same thing, but it would do it with a smooth continuous formula. What makes that interesting is that AI can use that. I even saw a paper some time back about fast, simple methods to take a derivative of all the pixel changes between two frames, but it was for tracking on drones. You could even use a fourth frame and get the third derivative…latency would go up, but both noise reduction and stabilization might improve (and AI could certainly use that to learn from).
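In code, those first two derivatives are plain finite differences over a short stack of frames; a minimal NumPy sketch:

```python
import numpy as np

def temporal_derivatives(f0, f1, f2):
    """First and second temporal derivatives per pixel.

    f0, f1, f2: three consecutive frames as float32 arrays.
    Units are intensity change per frame.
    """
    velocity = f2 - f1                 # first derivative: needs two frames
    acceleration = f2 - 2.0 * f1 + f0  # second derivative: needs three frames
    return velocity, acceleration
```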

What you have now works quite well, and it is fast and low latency. Sometimes though I can’t help myself and think about how to do something better even when it doesn’t need to be better! 😀


Using several frames is really interesting (only for videos, of course; you can’t do that with a single picture).

Classical noise removal filters use neighboring pixels, and this destroys the sharpness of the image. Working with several frames on a single pixel preserves sharpness, and this is really important for astronomy.

The detail in the image depends on the diameter of the mirror (or lens) and, of course, on the wavelength of the signal you catch. To get more detail, you need a bigger mirror (or lens), and that means a lot of money. Losing detail is too bad.

As you said, you can analyze the changes between 2 or more images. I will continue to work in that way.

For example, my latest filter works with 3 frames. It is called 3-frame noise removal (3FNR). It looks at the magnitude of the changes, but it also looks at the direction in which the values are changing (positive or negative).

If the changes always go the same way (positive or negative), it probably means a real change in the signal value (a real increase or decrease). If they go opposite ways, it can mean it is a noise artifact.

This means I allow larger changes when the direction is consistent, and only small changes when the direction flips.
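Not the actual 3FNR code, but a minimal sketch of that sign-consistency rule (the damping factor is a made-up value):

```python
import numpy as np

def three_frame_nr(f0, f1, f2, damp=0.3):
    """Sketch of a 3-frame noise-reduction rule.

    f0, f1, f2: consecutive frames as float32 arrays. If a pixel moved
    the same direction twice, treat the change as real signal and keep
    it; if the direction flipped, treat it as noise and only let the
    pixel move a fraction `damp` of the way.
    """
    d1 = f1 - f0
    d2 = f2 - f1
    monotonic = (d1 * d2) > 0       # same sign on both steps
    return np.where(monotonic, f2, f1 + damp * d2)
```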

This is really useful when my video is not static (when I move the mount), because each pixel value can change a lot due to the movement of the mount. This prevents ghosting in the video.

I really think there is big potential in working with several frames for video noise removal, and as far as I know, no one has done this job. This method preserves sharpness and does not destroy small signal information. In that way, it is much better than the classical bilateral, Gaussian, KNN, or fast NLM2 filters and things like that.

Now I just have to turn that potential into reality.

Alain


Yes, the goal is to find out which neighbor pixel, in the next frame, is the closest representation of the previous frame’s pixel. In other words, if there is atmospheric distortion, but not noise, then the pixel drifts to a new neighbor location. Knowing which new pixel map represents the prior frame on the new frame means you could stretch or compress or skew to push those pixels back to their original location.

For noise, if the noise is part of the sensor, this would be based on the pixel rather than the data. One could include that to make it easier to find whether a pixel has drifted (after compensating for closed-shutter noise…which might not matter…knowing a pixel has moved to a different location would then be more straightforward). Rather than losing data, you would move it back to where it was originally by remapping pixels. This is similar to an active lens which adjusts quickly, except that it is performed after the sensor image has been created (instead of trying to keep the original image stable, we fix through image manipulation what the active lens would have fixed).
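One standard way to sketch that remapping (not necessarily what JetsonSky would do) is dense optical flow plus cv2.remap: estimate where each pixel drifted, then sample it back to its reference position. The Farneback parameters below are OpenCV’s commonly used example values, not tuned ones:

```python
import cv2
import numpy as np

def unwarp_to_reference(ref_gray, cur_gray, cur_color):
    # Dense per-pixel motion from the reference frame to the current one
    # (both flow inputs must be 8-bit single-channel images).
    flow = cv2.calcOpticalFlowFarneback(ref_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = ref_gray.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    # For each reference position, fetch the pixel from where it drifted to.
    map_x = (gx + flow[..., 0]).astype(np.float32)
    map_y = (gy + flow[..., 1]).astype(np.float32)
    return cv2.remap(cur_color, map_x, map_y, cv2.INTER_LINEAR)
```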

Doing this with more frames allows more precise knowledge since it is equivalent to seeing a higher order derivative. One frame is scalar, and there isn’t much you can do. Two frames is linear, or first order, and if the distortion is constant, then it is perfect and you don’t need to go further. However, atmospheric distortion is not constant; over a very short time it may approximate a linear change (first order). If the turbulence changes over time, then more frames can measure a higher order acceleration or describe the change better. If the turbulence is more or less “transcendental”, and thus moves around like a wheel which is not perfectly round and which distorts (a combination of trig functions), then those frames can be used to instead perform a Fourier decomposition and determine the exact function and shape of the distortion. You’d get the best possible stabilization.
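For instance, the periodic part of a single pixel’s value over a window of frames can be pulled out with a real FFT. A small sketch, where the frame rate and window length are assumptions:

```python
import numpy as np

def dominant_oscillation(series, fps=30.0):
    """Strongest periodic component of one pixel sampled over N frames.

    series: 1-D array of the same pixel's value across frames.
    Returns (frequency_hz, amplitude) of the largest non-DC term.
    """
    series = np.asarray(series, dtype=np.float64)
    spectrum = np.fft.rfft(series - series.mean())
    freqs = np.fft.rfftfreq(series.size, d=1.0 / fps)
    k = np.argmax(np.abs(spectrum[1:])) + 1   # skip the DC bin
    amplitude = 2.0 * np.abs(spectrum[k]) / series.size
    return freqs[k], amplitude
```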

Regarding that last paragraph, if you know the functions (at least over a short time for several frames), then one could pre-correct based on expected movement from pixels changing due to atmospheric distortion. This could mean lower latency once you have a few frames in; software could later (meaning a frame later or several frames later or after the session) fix any pre-correction that wasn’t quite perfect. Having more than two frames of vertically stacked pixel-over-time data means predicting the error in real time before it happens.

I’m guessing that if the mount orientation data is included, then this could be used to make the distortion correction easier. You’d compensate for a new camera angle, and then run this; you’d still have the ability to move pixels back to where they “should” be, but there would be less work than if you did not correct first for actual mount movement.


It is hard to work with neighboring pixels to manage turbulence.

Professional telescopes use a mechanical system (adaptive optics) that deforms the mirror surface to correct the light path. Turbulence can move the theoretical focus point very far, and the result of the turbulence mixes many pixel values together. I don’t think it is really possible to put each piece of information back on the right pixel.

If turbulence only meant deformation of a perfect image, we could simply undo the deformation to restore the right geometry of the target.

This subject is very difficult and at least too difficult for me.

I focus on easier things, like writing a colour saturation enhancement with a CuPy raw kernel instead of using a torchvision function. It works now!
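Not Alain’s actual kernel, but a minimal sketch of how a saturation boost can be written as a CuPy RawKernel, assuming 8-bit interleaved BGR input and a made-up `gain` parameter (each channel is pushed away from the grey axis):

```python
import cupy as cp

_sat_kernel = cp.RawKernel(r'''
extern "C" __global__
void saturate(const unsigned char* src, unsigned char* dst,
              const int n, const float gain)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i >= n) return;
    float b = src[3 * i], g = src[3 * i + 1], r = src[3 * i + 2];
    float luma = 0.114f * b + 0.587f * g + 0.299f * r;  // Rec.601 weights
    dst[3 * i]     = (unsigned char) fminf(255.0f, fmaxf(0.0f, luma + gain * (b - luma)));
    dst[3 * i + 1] = (unsigned char) fminf(255.0f, fmaxf(0.0f, luma + gain * (g - luma)));
    dst[3 * i + 2] = (unsigned char) fminf(255.0f, fmaxf(0.0f, luma + gain * (r - luma)));
}
''', 'saturate')

def enhance_saturation(bgr_gpu, gain=1.5):
    # bgr_gpu: contiguous cupy uint8 array of shape (H, W, 3).
    n = bgr_gpu.shape[0] * bgr_gpu.shape[1]
    out = cp.empty_like(bgr_gpu)
    threads = 256
    blocks = (n + threads - 1) // threads
    _sat_kernel((blocks,), (threads,),
                (bgr_gpu, out, cp.int32(n), cp.float32(gain)))
    return out
```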

Alain


I have released a new JetsonSky version on GitHub: V41_11RC

  • some bugs fixed
  • new 3FNR noise removal filter (better with dynamic images and subjects)
  • colour saturation enhancement is better and faster with this version

Alain


Hi Alain, I’m a Jetson (Nano) enthusiast.
I congratulate you for the work done.
I found your post very interesting, it really intrigued me and little by little I think I’ve read all the “episodes”.
Since my kids are a little older now and are getting interested in astronomy, I thought I’d use it and see how it works. I wonder if the currently available version also works on the Jetson Nano.
Thank you.
Greetings
Giulio


Hello Giulio,

many thanks for your congratulations.

JetsonSky should be OK with the Jetson Nano. I made it run on a Nano, but that was some years ago. I suppose it still works. Even if it does, it won’t run very fast, unfortunately. An Orin Nano would be more suitable.

If it does not work, just tell me what’s going on. I will try to solve the issues.

JetsonSky also works on a classical Windows PC, but it needs an NVIDIA GPU.

Have a nice day.

Alain

Hello,

still working on JetsonSky, especially the noise removal filters, trying to get good results on a single frame without losing details in the picture.

The weather is bad so i can’t make outdoor tests.

I have tested the latest JetsonSky version with the AGX Orin and the Xavier NX. It works fine, without problems.

I made some quick treatments of some Hubble Space Telescope pictures. Here are some of the results, just for the beauty of the sky:


That’s all.

Alain


Wow, those are amazing!


Many thanks, linuxdev!

Hi everyone,

I have released a new JetsonSky version: V42_01RC.

It is kind of a major release because I added and changed many things:

  • Some bugs removed
  • Speed optimization
  • No longer needs PyTorch & TorchVision
  • Improved 3FNR noise removal filter
  • Added a new noise removal filter called NR P2. This filter works on a single frame, like the NR P1, KNN & NLM2 filters. The 3FNR, AADF and Variation Reduction filters work with several frames
  • Added the CLL filter, which is a low-light contrast enhancement filter. It calculates a LUT to modify the low lights. You can display the LUT by checking the TRCLL checkbox and see how the 3 parameters change it (a minimal sketch of the idea follows after this list).
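Not JetsonSky’s actual CLL formula, but a sketch of the idea with assumed parameter names (threshold, gamma, amount): build a 256-entry LUT that lifts values below a threshold and leaves the highlights untouched:

```python
import numpy as np
import cv2

def low_light_lut(threshold=80, gamma=0.6, amount=1.0):
    """Three-parameter low-light LUT: gamma lift below `threshold`,
    identity above it; `amount` blends between the two."""
    x = np.arange(256, dtype=np.float32)
    lifted = threshold * (x / max(threshold, 1)) ** gamma  # continuous at threshold
    lut = np.where(x < threshold, x + amount * (lifted - x), x)
    return np.clip(lut, 0, 255).astype(np.uint8)

# Applying it to an 8-bit frame:
# out = cv2.LUT(frame, low_light_lut(80, 0.6, 1.0))
```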

The software is starting to be really useful for deep sky captures.

Concerning noise management, I now have many filters.

Filters that work on a single frame:
NR P1 (homemade filter)
NR P2 (homemade filter)
KNN (from NVidia code example)
NLM2 (from NVidia code example)

Filters that work on multiple frames:
3FNR (homemade filter)
AADF (homemade filter)
Variation reduction (homemade filter)

My homemade filters work really well, are really fast, and give very good results. With a powerful GPU, we can do live treatment with those filters.

I think NVIDIA should try them very seriously and talk with me about their use (adding those filters to VPI, for example, or other things).

I know writing code without AI is considered crappy today, but I really recommend NVIDIA test my software and see if they can use some parts of JetsonSky. But I will survive if they don’t!

The link to JetsonSky, as usual:

Alain

My mistake with the Jetson Linux version of JetsonSky: I have uploaded a new version: V42_02RC.

I get better results if I use the OpenCV split and merge functions instead of the CuPy equivalents.

Alain

Hello,

I have released a new version of JetsonSky: V42_04RC.

I made some changes to improve fps in the Windows version only. Why? Because my laptop’s GPU has many CUDA cores and sometimes performs better with CUDA than with OpenCV.

I wrote an image debayer function using a CUPY RAW kernel for 2 reasons:

  • it is faster than the classical OpenCV cv2.cvtColor function (which is already quite fast)
  • I can control the debayer algorithm to get better sharpness in the RGB image

I handle the RGGB, BGGR, GBRG and GRBG Bayer patterns. A sketch of the underlying interpolation is shown below.
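The GPU kernel itself is long, but the arithmetic of a bilinear debayer can be sketched on the CPU with normalized convolutions. This is a generic illustration for the RGGB pattern (the other patterns just shift the channel masks), not the JetsonSky kernel:

```python
import numpy as np
from scipy.ndimage import convolve

def debayer_rggb_bilinear(raw):
    """Bilinear demosaic of an 8-bit RGGB Bayer frame."""
    raw = raw.astype(np.float32)
    h, w = raw.shape
    # Masks marking where each channel was actually sampled.
    r_mask = np.zeros((h, w), np.float32); r_mask[0::2, 0::2] = 1
    g_mask = np.zeros((h, w), np.float32)
    g_mask[0::2, 1::2] = 1; g_mask[1::2, 0::2] = 1
    b_mask = np.zeros((h, w), np.float32); b_mask[1::2, 1::2] = 1
    # Interpolation kernels for the sparse (R/B) and denser (G) channels.
    k_rb = np.array([[.25, .5, .25], [.5, 1, .5], [.25, .5, .25]], np.float32)
    k_g  = np.array([[0, .25, 0], [.25, 1, .25], [0, .25, 0]], np.float32)
    # Normalized convolution: interpolate only from sampled positions.
    r = convolve(raw * r_mask, k_rb) / np.maximum(convolve(r_mask, k_rb), 1e-6)
    g = convolve(raw * g_mask, k_g)  / np.maximum(convolve(g_mask, k_g), 1e-6)
    b = convolve(raw * b_mask, k_rb) / np.maximum(convolve(b_mask, k_rb), 1e-6)
    return np.clip(np.dstack([r, g, b]), 0, 255).astype(np.uint8)
```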

I also added a hot pixel removal function, but it needs a RAW video/capture to work well. The function does not work with an already debayered video/capture. A sketch of the idea follows below.
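A minimal sketch of one common approach (not necessarily JetsonSky’s): on a Bayer mosaic, same-color neighbors sit 2 pixels away, so compare each photosite against the median of those neighbors and replace it when it sticks out. The threshold is an illustrative value for 8-bit data:

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_hot_pixels(raw, threshold=40):
    """Replace isolated hot pixels in a RAW (pre-debayer) frame.

    After debayering, a hot pixel is already smeared into its
    neighbors, which is why RAW input is required.
    """
    fp = np.zeros((5, 5), bool)
    fp[::2, ::2] = True        # same-color neighbors on the Bayer grid
    fp[2, 2] = False           # exclude the pixel itself
    med = median_filter(raw, footprint=fp)
    hot = raw.astype(np.int32) - med.astype(np.int32) > threshold
    out = raw.copy()
    out[hot] = med[hot]
    return out
```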

Everything seems to work fine.

The current JetsonSky interface (Windows and Linux get the same interface):


The Linux version gets exactly the same functions as the Windows version, but some functions run on the CPU instead of the GPU because the CPU is faster there (I guess OpenCV uses OpenCL and maybe multi-CPU algorithms).

Things are going quite well with JetsonSky development. I still have to replace the OpenCV cv2.resize function with a CUPY RAW kernel to gain a few more fps. But I must say JetsonSky is starting to be fully usable for my purpose.

Too bad I arrived with JetsonSky 5 years too late. Now the magic word is AI. AI here, AI there, AI everywhere. As JetsonSky is 100% AI-free, it is considered crappy, old-fashioned software because it has no AI inside.

Even NVIDIA is 100% AI now. Nothing exists except AI. I must be getting too old and need retirement.

Anyway. I will continue without AI. I should survive.

Bye.

Alain


Hello,

as I said, JetsonSky has reached its objectives. Now I have to make some small optimizations, but I think the job is done.

The V42_02RC version is the last version I will upload to GitHub. It works for both the Linux and Windows versions.

Since June 21st, 2019, the start of this topic, many things have been done. It was interesting.

I would like to thank NVIDIA for its support, and especially @dusty_nv for his great help.

Now I will concentrate my work on the Windows platform with my laptop (the RTX 4080 brings huge capabilities with its CUDA cores).

Many thanks to all the people who discussed with me over all those years. I hope my work inspired them a bit.

But it is time now to end this long topic.

If I get some interesting videos at some point, I will post them here.

I wish you interesting projects with the NVIDIA Jetson SBCs.

Clear sky.

Alain


Hi,

a test on the Moon yesterday with my scope. I used JetsonSky for color enhancement.


Alain

An example of treatment with JetsonSky on the Moon.

Turbulence was really high, which made it useful for testing the turbulence management.

Alain

A new test of the colours of the Moon’s soils.

I am not very pleased with the YouTube version of the video because it is not as good as the real video. But I have solved a colour blinking problem. The colours are now stable, except when the mount moves, of course. A moving video is harder to manage.

Alain

An example of a treatment with JetsonSky on a Moon RAW image (C9.25, IMX485MC sensor) to retrieve the colours of the soils. Single-pass treatment (multiple filters):

I focus my efforts on the PC, but JetsonSky is the same for Jetson & PC.

Alain