Electronically Assisted Astronomy with a Jetson Nano

Hello,

I have just installed PyOpenCL on the Jetson Nano.

Quite simple: sudo apt-get install python3-pyopencl

I made a very first test (a basic arithmetic operation on 2 huge Numpy arrays).

The result:
Jetson Nano + PyOpenCL: 0.20 seconds
Jetson Nano + pure Numpy: 0.75 seconds

The acceleration factor is about 3.7x.
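For reference, a minimal sketch of this kind of comparison (illustrative operation and array size, not my exact test; the PyOpenCL part assumes an OpenCL device is available and is skipped otherwise):

```python
import time
import numpy as np

N = 2_000_000
a = np.random.rand(N).astype(np.float32)
b = np.random.rand(N).astype(np.float32)

# Numpy baseline: a simple element-wise operation
t0 = time.time()
ref = a * b + a
cpu_time = time.time() - t0

# PyOpenCL version of the same operation (skipped if no OpenCL device)
try:
    import pyopencl as cl
    import pyopencl.array as cla
    from pyopencl.elementwise import ElementwiseKernel

    ctx = cl.create_some_context(interactive=False)
    queue = cl.CommandQueue(ctx)
    d_a = cla.to_device(queue, a)
    d_b = cla.to_device(queue, b)
    d_out = cla.empty_like(d_a)
    kernel = ElementwiseKernel(ctx,
                               "float *a, float *b, float *out",
                               "out[i] = a[i] * b[i] + a[i]")
    t0 = time.time()
    kernel(d_a, d_b, d_out)
    queue.finish()               # wait for the device before stopping the clock
    gpu_time = time.time() - t0
    print(f"Numpy: {cpu_time:.3f}s  PyOpenCL: {gpu_time:.3f}s")
except Exception:
    print(f"Numpy: {cpu_time:.3f}s  (no OpenCL device available)")
```

Note that the first kernel call includes compilation, so a warm-up run is needed before any serious timing.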

It’s interesting because I previously made tests with PyCuda, and in that case pure Numpy basic operations were faster than the PyCuda routines (most of the time).

This is really cool because, for very basic image filtering, I used Numpy since it was faster than PyCuda. Now I will use PyOpenCL for basic filters and get faster results (about 3 times faster, I guess) than with Numpy.

So, complex filters with PyCuda and basic filters with PyOpenCL.

Alain

Hello,

I made further tests with PyOpenCL and I must say real-life use is really disappointing.

Basic colour image treatment is about 10 times slower than the OpenCV or Numpy routines.

Maybe I missed something, but I don’t think so.

Anyway, it’s important to test several software solutions in order to keep the best one.

For complex routines, PyCuda is still a must-have, and for very basic routines, OpenCV and Numpy are really great.

Alain

Hi,

For now, I think I have made enough tests to validate my technical choices for my acquisition/treatment software.

So, SkyNano (or SkyPC) has reached V1.

External Media

Python is a good choice for this kind of software, even if it is not the fastest language. With OpenCV and PyCuda, I must say the result is very good. OpenCL is not interesting at all, so I will now focus my work on PyCuda only.

As both the Jetson Nano and a laptop computer (under Windows 10 with an Nvidia GPU) can manage Python and CUDA, my software can be used on those 2 platforms. It is really interesting.

Now, SkyNano is fully functional and it can be used as it is (I still need to work on remote control). I will be able to make filter tests with this platform.

I will also try to rewrite my software in C++ to get faster software, but this will take some time.

I still want to thank Nvidia for the Jetson Nano. It’s a really good SBC, and all the work I do with it can be used with my laptop. Really great.

Alain

Some quick considerations about low light acquisition.

Modern sensors bring very interesting results (quite low noise, high sensitivity). But we have 2 main problems:

  • very good sensors are really expensive
  • hardware improvements take some time

Even if a sensor is good, I think the sensor’s raw output must be improved with software treatments, as we can’t buy the newest sensors each time they reach the market.

Here is a small example of signal enhancement with appropriate real-time filtering. The sensor setup is the same, but in the upper image there is no filtering, while in the lower image several filters are applied (in real time) to reveal the small sensor signal.

External Media

The treatment is applied with my Jetson Nano, which is powerful enough to apply several treatments to a 6 Mpx video (Sony IMX178 colour sensor).

This technique can be applied to very low light observation. Of course, night sky survey is a very good candidate for such treatment.
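As an illustration of the principle (not my actual filter chain, which is more involved), a simple gamma stretch is one of the cheapest ways to lift faint signal in a real-time pipeline, since it reduces to a 256-entry lookup table per frame:

```python
import numpy as np

def stretch_low_light(frame: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Lift faint signal with a gamma < 1 stretch (uint8 in, uint8 out)."""
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return lut[frame]            # per-pixel lookup, cheap enough for video

# A dark synthetic frame: faint values are lifted well above their input level
frame = np.full((480, 640), 20, dtype=np.uint8)
out = stretch_low_light(frame)
print(frame[0, 0], "->", out[0, 0])
```

The gamma value here is illustrative; in practice it has to be tuned against the noise floor, because a stretch amplifies noise just as much as signal.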

Alain

Another example, with a cleaner result.

This time, the exposure time is longer and the camera gain is much smaller. The IMX178 sensor is rather good, but its photosites are very small (2.4 µm) and raising the gain generates heavy noise.

External Media

Real-time treatments reveal a lot of information that was hidden in the untreated image.

Alain

Hello,

I have tried something a bit different with the Jetson Nano and my sky survey video: automatic star and satellite detection.

Here is a very first attempt:

I used PyCuda for the post-treatment.

Well, it’s a very first try. I need to make many improvements to get something usable, but the subject is really interesting.

The Maxwell GPU is really welcome because this kind of treatment needs power. But I don’t think all of the treatments can be done on the GPU, because it’s really hard to get parallel treatment with blocks and threads when an object can sit across two image zones.

So, I think I will need both the GPU and the CPU to make this function work.

Alain

Hello,

Still working on my automatic star and satellite detection routine.

Applying a threshold filter to try to catch bright stars or satellites is interesting (though not that easy, because the raw video may be very noisy and of poor quality), but it only gives you information in the image, and that kind of information can’t be used to calculate anything if you do not go further.

To be able to make some calculations, you have to find the coordinates of the objects (and other things like relative size, form factor, etc.). This means I have to find the objects in my images.

Here is a video showing different kinds of treatment.

Upper left: the raw video
Upper right: the raw video with the threshold result (but no X,Y coordinates!)
Lower left: the treated video (cleaned images), this time with object detection (circles with X,Y coordinates and size); this time I can make calculations with the objects!
Lower right: the raw video with the detected objects.
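The detection step itself (threshold, then group bright pixels into objects with X,Y coordinates and size) can be sketched on the CPU like this; it is a toy flood-fill version of the idea, not my PyCuda routine:

```python
import numpy as np

def detect_bright_objects(img: np.ndarray, thresh: int = 128):
    """Threshold the image, then group bright pixels into objects with a
    simple flood fill. Returns one (x, y, size) tuple per detected object."""
    mask = img >= thresh
    seen = np.zeros_like(mask, dtype=bool)
    objects = []
    h, w = mask.shape
    for y0 in range(h):
        for x0 in range(w):
            if mask[y0, x0] and not seen[y0, x0]:
                stack, pixels = [(y0, x0)], []
                seen[y0, x0] = True
                while stack:                      # flood fill one object
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                # centroid (x, y) and pixel count of the object
                objects.append((sum(xs) / len(xs), sum(ys) / len(ys), len(pixels)))
    return objects

# Synthetic frame: two bright "stars" on a dark background
img = np.zeros((16, 16), dtype=np.uint8)
img[2:4, 2:4] = 200      # star 1: a 2x2 blob
img[10, 12] = 255        # star 2: a single pixel
stars = detect_bright_objects(img)
print(stars)
```

On real frames, the size and form factor of each object are what let you reject hot pixels and artefacts afterwards.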

I still need to work to get a robust routine, but I am quite happy with the result.

The post-treatment was made on the Nvidia #JetsonNano with Python, PyCuda, Numpy and OpenCV. I did not use any AI routines to create it.

Well, that’s all for now. I am very happy with the result. Maybe someone could think this kind of work does not need heavy routines, but in fact I had to apply many filters (clean the image, enhance bright dots, remove the background gradient, remove noise, apply an appropriate threshold, collect the objects, find their size, remove artefacts, etc.)… Quite a lot of work, in fact.

The video resolution is 1544x1040 pixels (that is to say about 1.5 Mpx). I must say the Jetson Nano can manage it (as sky survey video means long exposure times most of the time), but it starts to be a bit weak. Anyway, it can do the job.

Alain

I am curious, what are the characteristics of the objects you wish to detect which are most difficult to detect? Are you looking to identify all stars, even very dim stars, or are you looking specifically for moving objects like satellites?

Hello linuxdev,

I mainly want to detect bright stars and maybe medium stars. It will depend on the acquisition settings.

If I want a low exposure time, I will get a noisy image because the camera gain will be very high. In that case, very dim stars will be very hard to distinguish from noise.

If I use a long exposure time, dim stars will be reachable.

Satellites and planes are top-priority targets. They are usually bright. Some satellites are hard to catch because they are very faint.

For now, I am trying to get a usable detection routine. When it is OK, I will try to make it efficient whatever the acquisition settings. Then, I will try to improve the speed of the detection.

Alain

How accurate is your aiming control system for the telescope? Using your 1.5Mp and cone of view, if you were to try to hold steady on one location in the sky, would you be able to aim with accuracy/resolution enough to know the aiming mechanism deviates significantly less than a single pixel?

The reason I ask is because if you can aim that well, then it means you could purposely introduce an intentional calibrated/known “wobble” of around 1/4 pixel (similar to antialiasing, but done mechanically instead of electronically…because the movement is less than a pixel the bulk of the light will always hit the original pixel, and a smaller portion will hit the surrounding pixels). Everything which wobbles correctly to that calibration is what you really want (the wobbling is calibrated, the software knows exactly what the current wobble is); everything not wobbling is noise. Seeing the wobble go from circular to elliptical would imply motion. Machine learning would be very well adapted to removing “non-wobbling” noise while keeping actual content, and would also be good at identifying elliptical distortions in the circular wobble.

PS: Sorry, “wobble” is a silly word for technical use, but it does describe things intuitively.

With my telescope mount, I can manage very small moves. But I only do traditional astrophotography with my telescope, because its focal length to diameter ratio is much too high for real-time video of the deep sky.

My small system (the one I describe in this topic) is not accurate enough to test your proposal. But as I want to use it for video observation of the sky, I don’t need a very accurate system.

My system is still a beta version I use to test hardware and software solutions. There are plenty of things to do with such a system. Getting a good idea takes some time (astrophotography experience is really useful to know what is interesting to do), knowing how to do it needs more time, making it needs even more time, and then testing/debugging…

Alain

Hello,

An interesting video showing the impact of light pollution on sky survey video.

The setup is the same for the 2 videos (Jetson Nano, IMX178 colour sensor, CCTV 8.5mm f/d 1.5 lens) but:

  • the left video was taken in the French Hautes-Pyrénées (a mountain area)
  • the right video was taken near Dijon (France), that is to say with city light pollution

Light pollution is really hard to manage.

We can use optical filters, but we lose an important part of the sky signal.

Software treatments can bring things, but they are not magic at all. The capture conditions are a bit extreme for the sensor, and it is really hard to get very good real-time software treatments for sky survey. The tiny sky signal is really killed by light pollution. The best software treatment can’t find information once you have lost it.

Depending on the sky, we have to find the best acquisition system (lens, camera sensor, software). Some solutions will work in specific conditions, some won’t.

It would be really interesting if Nvidia could collaborate with a sensor maker (like Sony?) to try to improve sensor treatments in particularly difficult conditions.

Alain

Just an idea on light pollution. If you do the reverse of finding the points in the image which are considered bright enough to be of interest, and instead use AI to find the darkest points which are between points of interest (and interpolation between dark points), perhaps then you would have a map of estimated light pollution. Subtracting that out would leave you with a less polluted image (presumably everything would be darkened by exactly the amount of light pollution).

I know, if sensor hardware had this directly built in it would be superior, but once you have CUDA to work with there isn’t much cost to having AI learn to find darkest points between points of actual light.

Hello linuxdev,

If I understand well, you are saying I should remove the background light pollution and keep the brightest stars.

I already do that with a gradient removal filter. I subtract the gradient information from the image. It works quite well (I still have to improve my filter).
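To give the idea (my real filter is smoother; this coarse block-mean version is just a stand-in), gradient subtraction looks like this: estimate a low-frequency background, subtract it, and the stars stay on top:

```python
import numpy as np

def estimate_background(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Very coarse background model: mean of large blocks, expanded back
    to full size. A real gradient remover would smooth/interpolate this."""
    h, w = img.shape
    bg = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.repeat(np.repeat(bg, block, axis=0), block, axis=1)

def remove_gradient(img: np.ndarray) -> np.ndarray:
    bg = estimate_background(img.astype(np.float32))
    return np.clip(img - bg, 0, 255).astype(np.uint8)

# Synthetic frame: a horizontal light-pollution gradient plus one bright star
x = np.linspace(0, 120, 64, dtype=np.float32)
frame = np.tile(x, (64, 1)).astype(np.uint8)
frame[32, 32] = 255
clean = remove_gradient(frame)
```

After subtraction, the background level drops a lot while the star remains the brightest pixel, which is exactly the property the detection routine needs.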

In the light-polluted video, I have already subtracted the gradient (most of it). The test capture without gradient removal is horrible (bright white/orange sky).

The treated video is good enough to see satellites, planes, meteors, etc., but the small stars and deep sky objects are drowned in the gradient and it is impossible to retrieve that signal with a software solution.

I can use astronomy filters which are supposed to remove light pollution (like UHC, UHC-S and CLS filters). In that case, the filters remove the sodium wavelength for example, and light pollution is much lower, but I lose a lot of signal. I have to raise the sensor gain very high and increase the acquisition time, and I get a very, very noisy image.

If the sky is bad, the sensor is the key point. For now, my camera is not good enough and I would need a new sensor. Unfortunately, a good camera is about $900-1200 and I have to find the money to buy one.

Maybe one day.

Alain

Hello,

The weather is bad, so it is hard to make new tests with the Jetson Nano.

Weather is a problem, but it seems I have a bigger problem: my colour camera is starting to die (I have strange issues with frame transfer; maybe the camera’s USB port is failing). If the camera dies, I will have to stop my work until I can get a new one, and that will take time because a good camera is quite expensive.

Anyway, I made a treatment test with an old capture of Jupiter.

I can apply post-treatment to a video (denoise, sharpen, contrast, etc.) before making the traditional treatment (frame sorting, stacking, wavelets, etc.).
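For the sharpen step, a classic building block is the unsharp mask (original + amount × (original − blurred)); here is a minimal Numpy sketch of the principle, with an illustrative box blur instead of the Gaussian one would normally use:

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Mean filter over a k x k window, with edge padding."""
    p = np.pad(img.astype(np.float32), k // 2, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def unsharp(img: np.ndarray, amount: float = 1.0) -> np.ndarray:
    """Unsharp mask: add back the difference between the image and a blur."""
    f = img.astype(np.float32)
    return np.clip(f + amount * (f - box_blur(f)), 0, 255)

# A vertical edge: after sharpening, the contrast across the edge increases
img = np.zeros((8, 8), dtype=np.float32)
img[:, 4:] = 200.0
out = unsharp(img, amount=1.0)
```

The `amount` parameter is the knob: too much of it amplifies noise, which is why denoising usually comes first in the chain.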

Here is a comparison of the standard treatment versus Jetson Nano post-treatment + standard treatment.

Post-treatment brings things, but it is hard to say which one is better.

External Media

Post-treatment can bring things if, and only if, the raw capture is very good.

The post-treatment was applied to the raw video with the Jetson Nano and my software (Python + PyCuda and OpenCV routines).

That’s all.

Clear sky!

Alain

Hello,

The weather is still bad (maybe it will get better this week).

So, I am still working on video pre-treatment.

First, the Moon:

The Jetson Nano can apply filters to the RAW video capture in order to improve the video quality.

The basic Moon treatment is always the same:

  • capture a RAW video of the Moon, trying to get the best raw result (it’s a balance between light and noise), hoping the turbulence will be low
  • sort the frames by the quality of each frame (sharp or blurry)
  • stack the best frames
  • apply gamma correction and wavelets.

The better the RAW video, the better the final result. If the RAW video is not good, there is no chance of getting a good final result.
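The frame-sorting step above can be sketched with a simple sharpness metric (variance of a Laplacian: higher means sharper); this is a CPU toy version of the idea, not the real stacking software:

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of a discrete Laplacian: higher means a sharper frame."""
    f = frame.astype(np.float32)
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])
    return float(lap.var())

def stack_best(frames, keep: int = 2) -> np.ndarray:
    """Sort frames by sharpness and average the best ones."""
    best = sorted(frames, key=sharpness, reverse=True)[:keep]
    return np.mean(best, axis=0)

# Synthetic test: a sharp checkerboard vs. a flat (fully blurred) frame
sharp = (np.indices((8, 8)).sum(axis=0) % 2 * 255).astype(np.float32)
blurry = np.full((8, 8), 127.5, dtype=np.float32)
result = stack_best([blurry, sharp, blurry], keep=1)
```

Averaging the kept frames is what reduces the noise; the sharpness sort is what keeps the turbulence-blurred frames out of the stack.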

Pre-processing on the Jetson Nano (using OpenCV and CUDA routines) can bring a significant enhancement of the final result.

In the following examples, the pre-processing was gamma adjustment, contrast enhancement, denoise and sharpen:

External Media

External Media

External Media

Then, the deep sky:

This time, the real-time video of the deep sky must be as good as possible, because there is no post-processing like we have with planetary or Moon pictures.

Real-time video needs short exposure times, which means we have to raise the sensor gain to get some signal. Unfortunately, this generates heavy noise and, depending on your location, you will have some light pollution.

In that case, we need real-time treatment to remove the noise and the light pollution while preserving the sky signal.

I am still working on my personal denoising routine, and I use my personal gradient removal (light pollution removal) in this example:

That’s all.

Clear sky.

Alain

Hello Alain,
Is it possible to have your program SkyNano?
You do a perfect job; I want to try it with my telescope and my 4K camera.
Thanks

Hello Joe,

Many thanks for your comment.

It’s hard for me to share my program because:

  • it is the least user-friendly software in the world
  • it only works with the ASI178MC or ASI178MM camera (specific SDK and Python wrapper)
  • there are some confidential routines in the program.

I plan to make a C version of my program (easier to share), but I need time to make an exe version of this software.

So, for now, I can’t share it.

Really sorry.

Alain

Hello,

I am still working on real-time filtering for astronomical pictures and videos, and this time I think I have got something.

If we forget the light pollution of the sky, the main problem is the noise.

So, working on a denoise filter is really important.

I have made a denoise algorithm I was able to use with PyCuda.

Until now, I used the KNN and Fast NLM filters (many thanks, Nvidia). Those filters are great, but I think they remove too much signal.

Astronomical photography is a quest for signal. Even if the final result is really clean, if you lose too much signal, you have missed something.

My filter removes a lot of noise, but it keeps much more signal and it preserves details.
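A toy experiment shows the tradeoff I mean (with a plain 3x3 median as the stand-in denoiser, not my filter): a hot pixel is removed and a small star core survives, but anything smaller than the filter window would be eaten as well:

```python
import numpy as np

def median3(img: np.ndarray) -> np.ndarray:
    """Plain 3x3 median filter - a stand-in for a real denoiser."""
    p = np.pad(img.astype(np.float32), 1, mode="edge")
    h, w = img.shape
    stack = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    return np.median(stack, axis=0)

rng = np.random.default_rng(0)
clean = np.zeros((32, 32), dtype=np.float32)
clean[15:18, 14:17] = 200.0          # a small 3x3 star
noisy = clean + rng.normal(0, 5, clean.shape)
noisy[5, 5] = 255.0                  # one hot pixel (pure noise)
out = median3(noisy)
print("star core:", out[16, 15], " hot pixel:", out[5, 5])
```

A signal-preserving astronomical denoiser has to do better than this on faint single-pixel stars, which is exactly where KNN and NLM start to lose signal too.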

Here are 2 examples of the treatment results:


External Media

External Media

If the objective is to remove noise and preserve as much signal as possible, I think my filter is the best.

Concerning filter speed, it is quite good.

For a 6.4 megapixel colour picture, using Python and PyCuda:
my filter: 61 ms
KNN filter: 69 ms
Fast NLM: 112 ms
OpenCV (CPU): 760 ms

For a 1.9 megapixel colour picture, using Python and PyCuda:
my filter: 23 ms
KNN filter: 25 ms
Fast NLM: 39 ms
OpenCV (CPU): 370 ms

My filter is a bit faster than the KNN filter (which is already a very fast algorithm).
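For anyone who wants to redo this kind of timing, the harness matters as much as the filter. A sketch of how I would measure (median of several runs, with a warm-up call because the first GPU call can include kernel compilation):

```python
import time
import numpy as np

def bench(fn, img, repeats: int = 10) -> float:
    """Median wall-clock time of fn(img), with one warm-up call first."""
    fn(img)                      # warm-up (JIT / kernel compilation, caches)
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(img)
        times.append(time.perf_counter() - t0)
    return float(np.median(times))

# Example: time a trivial element-wise filter on a synthetic frame
img = np.random.rand(1024, 1024).astype(np.float32)
t = bench(lambda x: x * 2.0 + 1.0, img)
print(f"median time: {t * 1000:.2f} ms")
```

The median is more robust than the mean here, because a single slow run (OS scheduling, thermal throttling on the Nano) would skew an average.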

So, I am quite happy with that result.

I must say my filter is made for astronomical pictures, not for daylight pictures. It could work with daylight pictures, but with some changes.

Dusty_nv, if you want to buy my filter, I am OK!

Alain
