Electronically Assisted Astronomy with a Jetson Nano

Thanks Thom!

The best for you and your family too!

Alain

I made a new filter for JetsonSky. It is supposed to manage atmospheric turbulence for planetary captures.

This filter prevents big changes in each pixel’s value, because big changes come from atmospheric turbulence (the real image is static).

This filter also manages camera sensor noise. It works better than AADF, but only for static images.

The very first tests:

Alain

The improvement with the filter is dramatic. I am curious about something though, and I really am not sure how to ask, but do your NN models look at Laplacian edge detection? It seems like after the filter you’ve applied is done, a second pass which maps where edges “should” appear could be used to stabilize it further (the down side would of course be needing more frames and perhaps a second pass, so it wouldn’t be quite as real time).

Hi linuxdev,

my filter is the simplest filter I could make: it allows variations within a range I set (from 1 to 20%) between image N-1 and image N. If the variation is under my threshold, I keep it. If the variation is over the threshold, I apply the maximum allowed variation.
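A minimal NumPy sketch of that clamping rule as I understand it (the function name, the 5% default and the one-grey-level floor are my assumptions; JetsonSky’s real implementation may differ):

```python
import numpy as np

def limit_variation(prev_frame, frame, max_ratio=0.05):
    """Clamp the per-pixel change between image N-1 and image N.

    Variations under the threshold are kept as they are; larger ones
    are replaced by the maximum allowed variation.
    """
    prev = prev_frame.astype(np.float32)
    cur = frame.astype(np.float32)
    # Floor of one grey level so dark pixels are never frozen forever
    # (this floor is my own addition, not necessarily JetsonSky's rule).
    max_step = np.maximum(prev * max_ratio, 1.0)
    delta = np.clip(cur - prev, -max_step, max_step)
    return np.clip(prev + delta, 0, 255).astype(np.uint8)
```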

I made a test with 2 passes, and the 2nd pass improves image stabilization with lower noise. This means I can apply a sharpen filter with the 2nd pass.

The filter is very simple, but the problem is that my target must be as static as possible. My mount can’t give me such a static video. For now, I use another program to stabilize the target (PIPP), but it would be more comfortable if I could stabilize the target within JetsonSky. Something to do later, I guess, when I have time.

Alain

Being able to stabilize it to that degree in real time adds so much clarity. It makes me want to see what you could do if you had a reference laser and dynamically adjustable lens! We’d be looking at something at least as good as the old Hubble telescope from the ground. But the whole point is to see what an average human with few resources can do, and you have outdone yourself! It is art.

Hi linuxdev,

many thanks for your comment! I was lucky with this idea and this filter. Sometimes a simple idea gives good results, sometimes not. I guess we will get some adaptive optics for amateurs one day, but it will be quite complex and expensive. My filter is more affordable!

JetsonSky is starting to get interesting functions, and now I need to make sky captures to test them. AADF gives good results with dynamic targets (allow high changes in pixel value and disable small changes), and the reduce-variations filter gives good results with static targets (allow small changes in pixel value and disable high changes).
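For contrast with the clamping sketch above, here is a rough sketch of the complementary policy described for AADF, suppressing small, noise-like changes while letting large, real changes through (the threshold and the keep/discard rule are my assumptions; the real AADF is certainly more elaborate):

```python
import numpy as np

def suppress_small_variation(prev_frame, frame, min_ratio=0.05):
    """Keep large pixel changes (real motion), discard small ones (noise).

    The opposite policy to limit_variation() above.
    """
    prev = prev_frame.astype(np.float32)
    cur = frame.astype(np.float32)
    # Changes below the threshold are treated as sensor noise.
    small = np.abs(cur - prev) < np.maximum(prev * min_ratio, 1.0)
    return np.where(small, prev, cur).astype(np.uint8)
```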

To be honest, when I started to learn Python 5 years ago to make JetsonSky, I did not expect to go so far!

Alain

Hi everyone,

Happy New Year!

Alain

For this new year, I have released a new version of JetsonSky on GitHub: V41_06RC.

This version includes the atmospheric turbulence management filter.

That’s all.

Alain

Hello,

good news: I have successfully implemented video stabilization in JetsonSky. The first tests are great.

Well, I did not invent anything this time. I use the OpenCV function cv2.matchTemplate. It works fine with the videos I make with JetsonSky.
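A minimal sketch of template-matching stabilization built on cv2.matchTemplate (the ROI handling, the matching method and the function layout are my assumptions; JetsonSky’s implementation may be organized differently):

```python
import cv2
import numpy as np

def stabilize_frame(ref_gray, frame, frame_gray, roi):
    """Translate `frame` so the ROI lands where it was in the reference.

    roi = (x, y, w, h): a region around the target in the reference frame.
    """
    x, y, w, h = roi
    template = ref_gray[y:y + h, x:x + w]
    # Normalized cross-correlation is fairly robust to brightness changes.
    scores = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, best = cv2.minMaxLoc(scores)
    dx, dy = x - best[0], y - best[1]
    # Shift the full-colour frame by the measured offset.
    m = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(frame, m, (frame.shape[1], frame.shape[0]))
```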

It will be more comfortable to stabilize the video within JetsonSky rather than with third-party software.

Now, I need to add many tests in JetsonSky to prevent errors while using the stabilization function.

Really cool. I am quite happy with this function.

Alain

Hello,

I have released a new version of JetsonSky: V41_07RC.

I have removed the old V41_03RC & V41_06RC because I had a bug when forcing a B&W image with a colour camera. This problem is solved with the new V41_07RC version.

The changes:

  • Some bugs removed
  • Added a checkbox (STAB) to enable image stabilization
  • Added a checkbox (TRF) to enable the Turbulence Reduction Filter, associated with a slider from 0.5 to 10 (the lower the value, the less turbulence remains)
  • Added the possibility to get the raw, non-debayered video (for a colour camera) when the “Filter ON” checkbox is unchecked and a camera is plugged in. If you save a video, it will be in an uncompressed, single-channel format (you will need to debayer it for further use; see the sketch after this list).
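A minimal sketch of debayering such a raw capture afterwards with OpenCV (the file name is hypothetical, and the exact Bayer pattern constant depends on your sensor, so check the camera documentation):

```python
import cv2

cap = cv2.VideoCapture("raw_capture.avi")  # hypothetical file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Many codecs return 3 identical channels for a mono video;
    # keep a single plane as the raw Bayer mosaic.
    raw = frame[:, :, 0] if frame.ndim == 3 else frame
    # The pattern (BayerRG, BayerBG, ...) depends on the sensor.
    colour = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)
    cv2.imshow("debayered", colour)
    if cv2.waitKey(1) == 27:  # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```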

I guess there are still bugs in this new version. I need to make more tests for sure.

An example of image stabilization with turbulence & noise treatment, using JetsonSky:

Alain

Hello,

yesterday, the sky was pretty good, so I decided to use my Celestron C9.25 to make tests on the Moon and the Orion Nebula (M42).

First, the Moon.

The C9.25 was at F/D 16, that is to say the focal length was 3760mm. The camera uses an IMX485 colour sensor.

I used JetsonSky with post-treatment:

  • image stabilization
  • noise and turbulence management
  • sharpness enhancement

Then I applied classical treatment (AutoStakkert 3 and RegiStax 6).

The result:


For a colour camera (with a Bayer matrix), the result is very good. Quite close to a monochrome camera.

Now, M42.

The C9.25 focal length was 740mm. I used HDR, noise management, turbulence management, etc. The result is interesting but unfortunately, the Moon was there, so…


The result (video):

It is an encouraging test.

Alain

I made a comparison video of my last Moon capture: RAW video vs enhanced video (4K video):

I must say it works quite well.

To be honest, with classical planetary treatments (AutoStakkert 3 and RegiStax 6), we get very small differences between the RAW video and the enhanced video.

But the enhanced video is really great when you make a live observation. The images are much cleaner, and you can see some details you won’t get with the classical video (RAW without treatment). The enhanced video gives you something more real, and it is really cool to get such a result with live observation.

Alain

Hello,

from my last captures, post-treated with JetsonSky: the colours of the Moon’s soil (4K video):

That’s all.

Alain

Hi,

I worked a bit more on my M42 video to try to get a better result:

As it was a test to see which parts of JetsonSky need improvement, the result is not bad, and it could have been better if M42 had been higher in the sky, if the Moon had been somewhere else, and if the mount had been set up better.

I hope other tests will get better conditions. If the conditions are good for deep-sky video, I think I can get the final result during live capture with a single-pass treatment.

Alain

Hello,

another example of video enhancement with JetsonSky.

I wrote another filter to manage camera noise. It’s called the 3-frame noise removal filter. It uses frames N-2, N-1 and N to manage the noise, and I must say it works quite well. It can be used with other noise removal filters like AADF, that is to say I can use several noise removal filters in a single pass. This is useful to manage noise AND atmospheric turbulence.
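The post does not describe the filter’s internals; a classic way to combine three consecutive frames is a per-pixel temporal median, sketched below (JetsonSky’s actual filter may combine the frames differently):

```python
import numpy as np

def three_frame_denoise(f_nm2, f_nm1, f_n):
    """Per-pixel temporal median over frames N-2, N-1 and N.

    Static detail survives, while a noise spike that appears in only
    one of the three frames is rejected.
    """
    stack = np.stack([f_nm2, f_nm1, f_n]).astype(np.float32)
    return np.median(stack, axis=0).astype(np.uint8)
```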

An example here:

I apply several filters in a single pass. The main goals are:

  • get a static image
  • remove the noise
  • apply a sharpen filter

If the video is not clean, you won’t be able to sharpen it.
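This is why the denoising comes first: sharpening amplifies whatever noise is left. A common sharpening choice is an unsharp mask, sketched below (JetsonSky’s actual sharpen filter may differ):

```python
import cv2

def unsharp_mask(frame, strength=1.5, blur_size=5):
    # Subtracting a blurred copy boosts fine detail; the same boost
    # also amplifies residual noise, hence denoise first.
    # blur_size must be odd for cv2.GaussianBlur.
    blurred = cv2.GaussianBlur(frame, (blur_size, blur_size), 0)
    return cv2.addWeighted(frame, 1.0 + strength, blurred, -strength, 0)
```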

The improvement is terrific, in my opinion at least.

Alain


Hi,

some examples of what we can get with filtering and multiple filtering (one-pass filtering).

The video shows 8 versions with different treatments. The last one shows what we would see if the camera sensor were noise-free and the turbulence equal to zero.

Alain

The stabilization is impressive. One thing I’m curious about, other than just stabilizing, is whether you’ve ever used noise filtering with the lens cap on for long exposures. I’m just thinking it is easier to stabilize if there is no noise, and it is easier to filter noise if there is no atmospheric turbulence. Anyone operating such a camera might take a long exposure with the lens cover on for later use, to subtract from actual images, but I have not heard of offering the long dark exposure for noise as part of AI input. If AI knew the noise for exposure times from short to longer and could predict what to subtract prior to motion stabilizing (or other noise reduction), then perhaps the stabilizing would be even better (right now, though, it is rather good).

Hello Linuxdev,

I tried image stabilization with a clean image (after noise removal processing). It works, but I get some artifacts. For now, I do not understand what’s going on.

If I ignore the artifacts, it is true that the stabilization is better, so I have to understand why I get strange results even though the stabilization improves.

Dark subtraction is quite classical, but the old dark subtraction method is a bit tedious, as you know. You need to get the right dark, which depends on exposure time, gain and sensor temperature. As I change exposure time and gain very often during the capture, it is not a good way for me.
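For readers who have not met it, the classical method is a straight subtraction of a matched master dark, for example (assuming the dark was captured with the same exposure time, gain and sensor temperature as the light frames):

```python
import cv2

def subtract_dark(light, master_dark):
    # Classical dark-frame subtraction: removes the fixed-pattern part
    # of the sensor noise. cv2.subtract saturates, so pixels darker
    # than the dark frame clip to 0 instead of wrapping around.
    return cv2.subtract(light, master_dark)
```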

If AI were able to compute the right dark, using at least the gain and exposure time information, it would be really cool. But to be honest, I don’t know how to do that! AI is one of the biggest mysteries for me. Maybe one day I will understand AI, but for now, my understanding of AI is equal to zero.

Alain


Does the noise removal itself add artifacts? Or is it the combination of noise removal followed by stabilization that causes the artifacts?

I’m not much of an AI guy, but did you perform stabilization training at some step using images which did not have dark subtraction? I’m thinking that maybe the training is expecting the noise, and that you’d have to subtract sensor noise before training the stabilizer. Noise itself could be its own model, with time and temperature and such (it’s nice to speculate when I don’t need to do the work 😁). Then train stabilization on the non-noisy data. I’m not far off from you in AI knowledge.

It would be a tremendous amount of data, for what I’m going to suggest, but mostly it is just for the sake of curiosity. One can perform Laplacian edge detection to find out where edges are, and if an edge moves, the image can be squeezed or stretched to put the edge back where it was in the prior frame. There’s no reason one couldn’t examine a single pixel and watch it over time, treating that as the “depth” of a three-dimensional image; in that case you could edge detect vertically through the depth as well. Slices could be vertically stabilized if you know the image is a “still” image. I would bet that AI could do that to simultaneously compensate for both sensor noise and atmospheric distortion (treat it as a 3D structure instead of frames, although it would no longer be real time unless you limited it to maybe 30 frames deep).
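A toy sketch of that 3D-volume view, with a spatial Laplacian on the newest slice (the random data, array shapes and 30-frame depth are only illustrative):

```python
import cv2
import numpy as np

# Stand-in for 30 aligned grayscale frames stacked into a
# (depth, height, width) volume.
rng = np.random.default_rng(0)
volume = rng.integers(0, 256, size=(30, 480, 640), dtype=np.uint8)

# One pixel watched "vertically" through time: a 1-D trace mixing
# sensor noise and turbulence-induced changes.
pixel_trace = volume[:, 100, 200]

# Spatial Laplacian edge map of the newest frame, the starting point
# for checking where edges "should" be from slice to slice.
edges = cv2.convertScaleAbs(cv2.Laplacian(volume[-1], cv2.CV_16S, ksize=3))
```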

I’m curious, have you ever tried to stack several frames and treat it as a 3D object?


It is the combination of noise removal and stabilization that brings the artifacts.

Before, I applied the stabilization on the RAW image and it was OK. If I apply the stabilization after the treatments (noise removal and so on), I get artifacts. I must look into this problem.

I still want to understand what AI can bring me, but I guess I will need to find a book (in French) to understand how I can use AI to filter my images. For now, I have a better understanding of Chinese than of AI!

When you speak about stacking and a 3D object, what do you mean exactly? I am not sure I understand.

Alain