Electronically Assisted Astronomy with a Jetson Nano

Copernicus region:

Alain


Hello everyone,

Some quick news about me and my project.

If everything goes well, I will get a new home by the end of the year. If all the steps go smoothly, I will have good conditions for new sky survey tests. I just have to wait.

This will also give me time to wait for new hardware, although nothing is certain there yet. I am still looking for a new colour camera with an IMX485 sensor, and for now I can't find an adapter to mount a Fuji X lens on a camera. Why a Fuji X mount lens? Because F0.95 lenses can be found at an affordable price and with quite good quality.

So, this summer, I will have a little time to work on my filters.

About filters: considering my tests on wide-field and narrow-field deep-sky captures, I must say that global amplification of the signal, or band-pass amplification, cannot work miracles. Why? Because the background of the images is not homogeneous.

So I have to create a filter that acts according to the overall illumination of the region the amplified pixel belongs to. It will be a kind of adaptive amplification, where the amplification depends on both the background and the pixel value (using a transfer function to set the amplification factor). With this filter, I will be able to apply the right amplification factor to each pixel of the picture. To sum up, I will create an amplification map that depends on the background and on the pixel values. I hope this filter will give good results and will raise the small signal in the pictures.

For sure, that kind of filter needs a very clean picture (I mean, no noise).
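Just to illustrate the idea, here is a rough sketch of such an amplification map (the blur size and the transfer function are only placeholders, not the final filter):

```python
import cv2
import numpy as np

# Load a mono frame (the file name is just an example)
img = cv2.imread("capture.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Estimate the local background with a large Gaussian blur
background = cv2.GaussianBlur(img, (0, 0), sigmaX=25)

# Placeholder transfer function: amplify more where the background is dark
# and where the pixel itself is faint; leave bright regions almost untouched
gain = 1.0 + 3.0 * (1.0 - background / 255.0) * (1.0 - img / 255.0)

# Apply the amplification map and go back to 8 bits
result = np.clip(img * gain, 0, 255).astype(np.uint8)
cv2.imwrite("capture_amplified.png", result)
```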

I will post results here as soon as I get some.

Clear sky.

Alain


Hello,

A quick view to show the impact of noise in a high-gain video capture with the IMX178 sensor:

AADF brings a big improvement, but it is far from perfect. Still, it helps a lot to retrieve the real signal in order to reveal tiny stars.

Without noise reduction, it is clearly close to mission impossible to get information about tiny stars.

Alain


I am curious about something: in the past, for higher frame rates, you have mentioned it isn't practical to subtract out a black-screen exposure to reduce some noise. Would it be possible to create a series of shuttered images over various exposure times (and perhaps temperatures), and use AI training to estimate noise more dynamically than subtracting out a single long black-screen noise image? If AI could predict the noise of a specific sensor, you could then apply not just the standard noise reduction, but also a reduction specific to that exact sensor; I'd think that would improve noise reduction. Call it a two-step noise reduction: once standard and not specific to the camera sensor, and once specific to the sensor.

Hello linuxdev,

Maybe it could be possible to use AI to manage noise. With many pictures covering a wide range of exposure times, gains and temperatures, a model could learn the specific noise response of a specific sensor. This would be great for dynamic noise management, since exposure time and gain change while I make a sky capture. It would be more appropriate than classic "dark subtraction".

But I must say I have no idea how to do that!
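Maybe something like this could be a starting point: fitting a very simple per-pixel model from a stack of dark frames (the data and the linear model below are purely illustrative, nothing I have actually tested):

```python
import numpy as np

# Purely illustrative dark-frame stack: darks[i] would be a shuttered capture
# taken with exposure_ms[i] and gain[i] (here it is just fake data).
exposure_ms = np.array([50.0, 100.0, 200.0, 400.0])
gain = np.array([300.0, 300.0, 400.0, 400.0])
darks = np.random.poisson(lam=2.0, size=(4, 480, 640)).astype(np.float32)

# Per-pixel linear model: dark(x, y) ~ a(x, y)*exposure + b(x, y)*gain + c(x, y)
n, h, w = darks.shape
design = np.column_stack([exposure_ms, gain, np.ones(n)])   # (n, 3)
targets = darks.reshape(n, -1)                              # (n, h*w)
coeffs, *_ = np.linalg.lstsq(design, targets, rcond=None)   # (3, h*w)

def predict_dark(exp_ms, g):
    """Predict a synthetic dark frame for a given exposure time and gain."""
    return (np.array([exp_ms, g, 1.0]) @ coeffs).reshape(h, w)

# A light frame taken at 150 ms / gain 350 could then be corrected with:
# corrected = light - predict_dark(150.0, 350.0)
```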

Alain

An example of a very basic approach to small-signal amplification, trying to avoid amplifying the noise.

I used a profile from the image and its smoothed version to try to apply the right amplification factor to each pixel.

Blue curve: original pixel value
Red curve: modified pixel value

Still some work to do.
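The profile comparison can be reproduced more or less like this (only a sketch; the row, the smoothing and the amplification are placeholders):

```python
import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread("capture.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Take one line of the image as a profile and smooth it
row = img[img.shape[0] // 2, :]
kernel = np.ones(31) / 31.0
smooth = np.convolve(row, kernel, mode="same")

# Placeholder amplification driven by the smoothed profile (the local background)
gain = 1.0 + 2.0 * (1.0 - smooth / 255.0)
modified = np.clip(row * gain, 0, 255)

plt.plot(row, color="blue", label="original pixel value")
plt.plot(modified, color="red", label="modified pixel value")
plt.legend()
plt.show()
```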

Alain

Hello,

I improved my test routine to get a more efficient transfer function that preserves both the dark signal and the bright signal.

I have included the routine in JetsonSky (using the Xavier NX and OpenCV to validate it).

Here is the first test on an old capture:

To be honest, the result is just amazing to me. The very small signal is not degraded (the noise stays low), and with the filter we can see many stars we were not able to see before. It's just great.

There is still some work to improve the filter, but it already rocks! I wrote it using OpenCV and NumPy to keep the programming easy. Once the filter is stabilized, I will rewrite it using PyCUDA to improve processing time.
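For the PyCUDA version, a per-pixel amplification like this can be expressed with an ElementwiseKernel; the gain formula below is only an illustrative placeholder, not the real JetsonSky filter:

```python
import cv2
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

# Placeholder per-pixel amplification: the same kind of formula as the
# NumPy prototype, just executed on the GPU
amplify = ElementwiseKernel(
    "float *out, float *img, float *background",
    """
    float g = 1.0f + 3.0f * (1.0f - background[i] / 255.0f) * (1.0f - img[i] / 255.0f);
    float v = img[i] * g;
    out[i] = v > 255.0f ? 255.0f : v;
    """,
    "adaptive_amplify")

img = cv2.imread("capture.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
background = cv2.GaussianBlur(img, (0, 0), sigmaX=25)

img_gpu = gpuarray.to_gpu(img)
bg_gpu = gpuarray.to_gpu(background)
out_gpu = gpuarray.empty_like(img_gpu)

amplify(out_gpu, img_gpu, bg_gpu)
result = out_gpu.get().astype(np.uint8)
```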

Considering the huge improvement when I compare the RAW video from the IMX178 sensor with the result we can get with appropriate processing, it's just amazing (in the video, the "RAW" part is actually an already cleaned video).

I am quite happy with those results. Really.

Alain

Hello,

I have reached all my main objectives (more or less) concerning live processing, so, apart from small improvements, I will wait for new hardware.

So, I will take a long break.

Bye.

Alain

Alain,

I'm starting down the same path. I'm a long-time EAA user and I've been thinking about using AI to help my EAA workflow for a while. My first question is: how did you get the ZWO camera working on the Jetson Nano? As far as I know, the ASI software isn't ported to support the Jetson platform. See the thread where I asked here:

https://bbs.astronomy-imaging-camera.com/d/12630-nvidia-jetson-nanoxavier-software


By the way, you might want to try the Telescope Adapters site. I know they have adapters for Fuji X mount cameras, but I'm not sure if they have what you need to use such a lens.

https://www.telescopeadapters.com/165-fuji#


Hello Curtis,

For sure, we can use ZWO cameras with the Jetson Nano/Xavier NX. I do.

You just need to pick up the SDK for Linux and use the ARMv8 libraries.

This works if you want to write your own software, for sure.

If you want to write your software in Python, you will need this:

https://bbs.astronomy-imaging-camera.com/d/6714-python-binding-for-zwo-asi-sdk-now-available
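A minimal example with this binding looks roughly like this (the library path and the settings are just examples; point it to the ARMv8 libASICamera2.so from the Linux SDK):

```python
import zwoasi as asi

# Point the binding at the ARMv8 SDK library shipped in ZWO's Linux SDK
# (the path below is illustrative)
asi.init("/home/user/ASI_SDK/lib/armv8/libASICamera2.so")

print(asi.get_num_cameras(), "camera(s) found:", asi.list_cameras())

camera = asi.Camera(0)
camera.set_control_value(asi.ASI_GAIN, 300)        # high gain, as for EAA captures
camera.set_control_value(asi.ASI_EXPOSURE, 50000)  # exposure in microseconds
camera.set_image_type(asi.ASI_IMG_RAW8)

frame = camera.capture()                            # NumPy array with the raw frame
print("captured frame:", frame.shape, frame.dtype)
```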

Concerning the Fuji X adapter: I already have an adapter to mount my Fuji X-T3 on my telescope. My problem is to mount a Fuji X lens on a ZWO camera. ZWO is not interested in that kind of adapter. The problem with ZWO is that we can't expect much support from them.

Have a nice day.

Alain
