Electronically Assisted Astronomy with a Jetson Nano

Copernicus region:

Alain

2 Likes

Hello everyone,

Some quick news about me and my project.

If everything goes well, I will get a new home by the end of the year. If all the steps go smoothly, I will have good conditions for new sky survey tests. I just have to wait.

This will also give me time to wait for new hardware, but nothing is certain yet. I am still looking for a new colour camera with an IMX485 sensor, and for now I can't find an adapter to mount a Fuji X lens on a camera. Why a Fuji X mount lens? Because we can find F0.95 lenses at an affordable price and with quite good quality.

So, this summer, I will have a little time to work on my filters.

About filters: considering my tests on wide-field and narrow-field deep sky captures, I must say that global amplification of the signal, or band-pass amplification, can't work miracles. Why? Because the background of the images is heterogeneous.

So, I have to create a filter that acts according to the local illumination of the region around the pixel being amplified. It will be a kind of adaptive amplification, where the amplification depends on both the background and the pixel value (using a transfer function to set the amplification factor). With this filter, I will be able to apply the right amplification factor to each pixel of the picture. To sum up, I will create an amplification map that depends on the background and the pixel values. I hope this filter will give good results and raise the small signal in the pictures.

For sure, that kind of filter needs a very clean picture (I mean no noise).
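To give an idea, here is a very simplified NumPy sketch of what I mean. The block size, the exponential transfer function and all the constants are just assumptions for the example, not my real routine:

```python
import numpy as np

def local_background(img, block=16):
    """Coarse background map: per-block mean, upsampled back to image size."""
    h, w = img.shape
    H, W = (h // block) * block, (w // block) * block
    means = img[:H, :W].reshape(H // block, block, W // block, block).mean(axis=(1, 3))
    bg = np.kron(means, np.ones((block, block)))
    # replicate the last rows/columns if the image size is not a multiple of `block`
    return np.pad(bg, ((0, h - H), (0, w - W)), mode='edge')

def adaptive_amplify(img, strength=3.0, tau_pix=40.0, tau_bg=40.0):
    """Amplification map: faint pixels over a dark background get the
    largest gain; bright pixels or a bright background get almost none."""
    img = img.astype(np.float32)
    bg = local_background(img)
    gain = 1.0 + strength * np.exp(-img / tau_pix) * np.exp(-bg / tau_bg)
    return np.clip(img * gain, 0.0, 255.0)
```

With this kind of gain map, a faint star sitting on a dark background is strongly raised while the same pixel value inside a bright region is left almost unchanged.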

I will post results here as soon as I get some.

Clear sky.

Alain

3 Likes

Hello,

a quick view to show the impact of noise in a high-gain video capture with the IMX178 sensor:

AADF brings a heavy improvement, but it is far from perfect. Still, it helps a lot to retrieve the real signal in order to reveal tiny stars.

Without noise reduction, it is clearly an almost impossible mission to get information about tiny stars.

Alain

1 Like

I am curious about something: in the past, for higher frame rates, you have mentioned that it isn't practical to subtract a dark-frame exposure to reduce some of the noise. Would it be possible to create a series of shuttered images over various periods of time (and perhaps temperatures), and use AI training to estimate the noise more dynamically than subtracting a single long dark frame? If AI could predict the noise of a specific sensor, you could apply not just the standard noise reduction, but also a reduction specific to that exact sensor; I'd think that would improve noise reduction. Call it two-step noise reduction: one step standard and not specific to the camera sensor, and one specific to the sensor.

Hello linuxdev,

maybe it could be possible to use AI to manage the noise. With many pictures covering a wide range of exposure times, gains and temperatures, the model could learn the specific noise response of a specific sensor. This would allow dynamic noise management, since the exposure time and gain change while I make a sky capture. It would be more appropriate than classic "dark subtraction".
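Maybe a very first step, without a real neural network, could be a simple per-pixel least-squares fit over shuttered frames taken at many exposure times and temperatures. Here is a toy sketch with a simulated sensor; the doubling-per-6-degrees dark-current rule, the noise level and all the constants are pure assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical sensor: per-pixel dark signal = offset + rate * exposure * 2**(temp / 6)
H, W = 8, 8
offset = rng.uniform(1.0, 3.0, (H, W))
rate = rng.uniform(0.05, 0.2, (H, W))

def shoot_dark(exposure, temp):
    """Simulated shuttered frame: dark signal plus a little read noise."""
    dark = offset + rate * exposure * 2.0 ** (temp / 6.0)
    return dark + rng.normal(0.0, 0.05, (H, W))

# "training set": shuttered frames over various exposure times and temperatures
settings = [(e, t) for e in (0.5, 1.0, 2.0, 4.0, 8.0) for t in (-10, 0, 10, 20)]
X = np.array([[1.0, e * 2.0 ** (t / 6.0)] for e, t in settings])   # design matrix
Y = np.stack([shoot_dark(e, t) for e, t in settings]).reshape(len(settings), -1)

# per-pixel least-squares fit of [offset, rate]
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)

def predict_dark(exposure, temp):
    """Dark frame predicted for settings never seen during training."""
    x = np.array([1.0, exposure * 2.0 ** (temp / 6.0)])
    return (x @ coef).reshape(H, W)
```

The predicted dark frame for the current exposure/gain/temperature could then be subtracted live, instead of a single fixed master dark.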

But I must say I have no idea how to do that!

Alain

1 Like

An example of a very basic approach to small-signal amplification, trying to avoid amplifying the noise.

I used a profile from the image and its smoothed version to try to apply the right amplification factor to each pixel.

Blue trace: original pixel value
Red trace: modified pixel value

Still some work to do.
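In NumPy, the idea looks like this (a 1-D sketch with hypothetical constants, not the exact routine I used):

```python
import numpy as np

def amplify_profile(profile, strength=2.0, win=15):
    """Boost only the part of the profile that rises above its smoothed
    version (the local background); flat regions stay untouched."""
    kernel = np.ones(win) / win
    smooth = np.convolve(profile, kernel, mode='same')
    detail = np.clip(profile - smooth, 0.0, None)
    return np.clip(profile + strength * detail, 0.0, 255.0)
```

Note that `mode='same'` zero-pads at the edges, so a real routine needs proper edge handling.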

Alain

1 Like

Hello,

I improved my test routine to get a more efficient transfer function that preserves both the dark signal and the bright signal.

I have included the routine in JetsonSky (using the Xavier NX and OpenCV in order to validate the routine).

Here is the first test on an old capture:

To be honest, the result is just amazing to me. The very small signal is not affected (the noise stays low) and, with the filter, we can see many stars we were not able to see before. It's just great.

There is still some work to do to improve the filter, but it already rocks! I wrote it using OpenCV and NumPy for ease of programming. Once the filter is stabilized, I will rewrite it using PyCUDA to improve the processing time.
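The shape of the transfer function I am talking about is something like this (the Gaussian shape and all the constants are only an illustration; my real curve is different):

```python
import numpy as np

def transfer_gain(v, amp=2.5, v0=60.0, sigma=20.0):
    """Bell-shaped transfer function: the gain stays close to 1 near zero
    (dark signal and noise floor preserved) and for bright pixels
    (no burnt highlights); the maximum boost is around level v0."""
    return 1.0 + amp * np.exp(-((v - v0) ** 2) / (2.0 * sigma ** 2))

def apply_transfer(img):
    img = np.asarray(img, dtype=np.float32)
    return np.clip(img * transfer_gain(img), 0.0, 255.0)
```

The point is that the curve returns to a gain of 1 at both ends, so the dark background and the bright stars are left alone while the faint mid-level signal is raised.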

If I consider the huge improvement between the RAW video from the IMX178 sensor and the result we can get with appropriate processing, it's just amazing (in the video, the "RAW" part is an already-cleaned video).

I am quite happy with those results. Really.

Alain

1 Like

Hello,

I have reached all my main objectives (more or less) concerning live processing, so, apart from small improvements, I will wait for new hardware.

So, I will take a long break.

Bye.

Alain

1 Like

Alain,

I'm starting down the same path. I'm a long-time EAA user and I've been thinking about using AI to help my EAA workflow for a while. My first question is: how did you get the ZWO camera working on the Jetson Nano? As far as I know, the ASI software isn't ported to support the Jetson platform. See the thread where I asked here:

https://bbs.astronomy-imaging-camera.com/d/12630-nvidia-jetson-nanoxavier-software

1 Like

By the way, you might want to try the Telescope Adapters site. I know they have adapters for Fuji X mount cameras, but I'm not sure if they have what you need to use such a lens.

https://www.telescopeadapters.com/165-fuji#

1 Like

Hello Curtis,

for sure we can use ZWO cameras with the Jetson Nano/Xavier NX. I do.

You just need to get the SDK for Linux and use the armv8 libraries.

This works when you want to write your own software, of course.

If you want to write your software in Python, you will need this:

https://bbs.astronomy-imaging-camera.com/d/6714-python-binding-for-zwo-asi-sdk-now-available

Concerning the Fuji X adapter, I already have an adapter to attach my Fuji X-T3 to my telescope. My problem is attaching a Fuji X lens to a ZWO camera. ZWO is not interested in that kind of adapter. The problem with ZWO is that we can't expect much support from them.

Have a nice day.

Alain

2 Likes

Hello,

I am still here, but I have little time to spend on my project.

In the coming months, I will improve the saturation enhancement code for the colours of the Moon, trying to get real-time processing of the colours of the soils during captures.

Until now, I was using Python routines (OpenCV, Pillow), but I am not very pleased with the results or the processing time.

I plan to write the code using PyCUDA. The main challenge is not raising the saturation; the challenge is preserving the details. This means I have to combine the saturated result image with the original image to preserve the details (the fully saturated image is really horrible). In that case, I hope the original image's luminance will rule the final image. I have just written down the theory; now I only have to turn it into code. Wait and see.
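The combination I have in mind can be sketched like this in NumPy (a simplified stand-in for the future PyCUDA code; the Rec.601 luma weights and the boost factor are assumptions for the example):

```python
import numpy as np

# Rec.601 luma weights for an RGB image
LUMA = np.array([0.299, 0.587, 0.114], dtype=np.float32)

def boost_saturation(img, k=1.8):
    """Push each channel away from the pixel's luminance by a factor k.
    The luminance itself is unchanged (before clipping), so the details
    carried by the luminance are preserved."""
    img = np.asarray(img, dtype=np.float32)
    y = img @ LUMA                     # (H, W) luminance of the original
    out = y[..., None] + k * (img - y[..., None])
    return np.clip(out, 0.0, 255.0)
```

Because every output channel is written as "luminance plus k times the chroma difference", the luminance of the result is mathematically the same as the original, which is exactly the "original image rules the final image" behaviour I am after.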

Another subject: by the end of the year or the beginning of 2022, I think I will get a new camera with a better sensor. That will be really welcome.

Well, that’s all for now.

Clear sky !

Alain

2 Likes

Hello,

improving the saturation enhancement was easier than I thought. The code is written and has been tested successfully.

Here is a video which shows that saturation enhancement alone (a single filter) is far from the only thing to do:

My test video is not really good for this exercise, but it is the only one I have on my hard drive. Next year I will make new captures, so I will be able to run further tests.

The complete processing algorithm is written with CUDA, so the processing time is quite fast. This allows me to apply the processing during capture.

That’s all.

Bye.

Alain

1 Like

Still testing my saturation enhancement filter (which preserves details).

This time, I used an old video I made some years ago.

It's a little flyover of the Moon using the Moon digital elevation model (DEM) and an old colour map of the Moon's soils (superimposed on the 3D model).

The DEM can be found here if you are interested (LRO and Kaguya satellites; thanks to NASA and JAXA):

https://astrogeology.usgs.gov/search/map/Moon/LRO/LOLA/Lunar_LRO_LOLAKaguya_DEMmerge_60N60S_512ppd

I used the Spaceyes3D software to make the flyover (the demo version, because it is a bit expensive).

The result is interesting and my filter works well.

Alain

3 Likes

Hello,

new test yesterday, with a very poor sky (very bad transparency) and many satellites. Still using the Jetson Xavier NX for live processing (as the exposure time is about 500 ms, there is no problem doing live processing).

I still hope to get a new camera by the end of the year (with the Sony IMX485 colour sensor). Still looking for a better lens.

Have a nice day.

Alain

A new test comparing the RAW video vs the treated video.

Alain

2 Likes

Hello,

things are going the right way.

Today is my birthday, and I have just seen that the camera I was waiting for has been released.

So, in a few months (no time left for now), I will have a new camera with the Sony IMX485 colour sensor. I really hope it will bring some real improvements, since the RAW video must be as good as possible in order to get really good results from my software.

The end of the year will be interesting.

Alain

4 Likes

Happy birthday! Looking forward to some of the newer photos and processing.

Happy birthday, Alain!
Keep making us dream with your new toy ;-)

Many thanks linuxdev !

Thanks Honey_pastouceul. We'll try. What a pleasure it is to write in French on a US forum!

Alain

2 Likes