Electronically Assisted Astronomy with a Jetson Nano

Maybe I have misunderstood your question.

For this example with Jupiter, my laptop (i9-13980H & RTX 4080) performs at about 35 fps for post-processing.

When I do live processing of the camera capture at 1260x960 resolution, I can manage about 55 fps, including capture and processing, with an IMX178MC sensor on USB 3. That’s quite fast.

Alain

No, you had this correct. As fast as I saw the “movement” in the “RAW turbulence”, it looked like maybe it was time-lapsed. For example, maybe it was taken over 10 minutes and then sped up to 1 minute. It sounds like this is the way the planet was actually appearing in real time.

The capture framerate for Jupiter was about 60 fps from what I remember. The resulting video is 25 fps. That is to say, the real-time turbulence in the atmosphere was faster than what you see in the result video.

That’s a lot of turbulence! I didn’t expect it to be that strong.

Yes, turbulence can be really high. Sometimes it’s quite low frequency, sometimes quite high.

Nights with low turbulence are rare. It seems global warming increases turbulence: many astronomers have noticed that turbulence is higher now than it was decades ago.

We will probably have many serious problems with global warming; turbulence is a real problem for astronomers, but for sure it is a less critical one!

Alain

Hello,

I will stop JetsonSky development for some weeks because I am a bit tired, have no free time to work on the software, and have many things to do at my job.

I will get back ASAP.

Clear sky for all.

Alain


Hello,

JetsonSky development is paused, but I had to solve a major problem with the design I had chosen for JetsonSky (no threads).

In fact, with no threads (I did not use one for camera image acquisition), the program is purely sequential: I perform the acquisition, then the processing, and so on. The acquisition time is wasted. It is not a problem with short exposure times, but it starts to be a problem when the exposure time is significant (more than 100 ms, for example).

So I had to change the JetsonSky code to run the camera acquisition in a thread. Now it’s better.
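What this change amounts to is the classic producer/consumer pattern: a capture thread fills a queue while the main loop processes. A minimal sketch, assuming a hypothetical grab_frame() in place of the real camera SDK call:

```python
import threading
import queue
import time

def grab_frame():
    # Placeholder for the real camera call; here it just simulates
    # a 10 ms exposure/readout and returns a dummy frame object.
    time.sleep(0.01)
    return object()

def acquisition_loop(frame_queue, stop_event):
    # Producer: keeps capturing while the main loop processes.
    while not stop_event.is_set():
        frame = grab_frame()
        try:
            frame_queue.put(frame, timeout=0.1)
        except queue.Full:
            pass  # drop the frame if processing can't keep up

stop = threading.Event()
frames = queue.Queue(maxsize=4)
t = threading.Thread(target=acquisition_loop, args=(frames, stop), daemon=True)
t.start()

processed = 0
while processed < 5:
    frame = frames.get()  # the next capture overlaps with this processing
    processed += 1        # ...real image treatment would happen here...

stop.set()
t.join()
```

With this structure the next exposure runs while the previous frame is being processed, so the acquisition time is no longer wasted.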

As always, the V53_01RC version is at the same place:

Alain

Hello,

A test from yesterday, targeting the Andromeda galaxy, M31.

Processing was applied during capture.

I used 16-bit capture, noise management, and HDR (from a single 16-bit capture).

The result is not bad.

Alain


The difference in noise is so obvious between 8-bit and 16-bit. I have a feeling that the sensor itself is the biggest problem, although it could be the A/D converter doesn’t “latch” in a stable way on the analog signal when running faster. There is supposed to be a sample and hold, then the conversion, and I suspect that it doesn’t matter if the error is in the sensor cell or in the sample and hold. If lower sensor temperature helps, then it would have to be the sensor side; otherwise it is probably the sample and hold running at either finer resolution or higher clock rate. Is the scan/clock rate the same between 8-bit and 16-bit? Is the sample and hold part of the sensor?

Hi linuxdev,

I am not really familiar with the details of the sensor and the ADC, but we have the sensor noise and the read noise. We also have to add the amp glow effect, which brings light where there is no light. And temperature is also a problem.

I guess the 16-bit noise is different from the 8-bit noise because of the ADC response and characteristics, but I couldn’t tell which one is better.

The video is a bit biased, because I used heavy noise management in 16-bit mode. From what I remember, I did not do the same in 8-bit mode, because the point was only to show the difference in signal strength between the 2 modes.

8-bit could be cleaner with the noise-removal filters, but I would need maximum gain (or a much longer exposure time, which would be a problem for a video). And even with my noise filters, I can’t manage maximum gain.

Something really important is the HDR mode with a single 16-bit exposure. HDR gives … HDR images (for sure), but it is also interesting with respect to noise.
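To illustrate why a single 16-bit exposure helps (this is not JetsonSky’s actual HDR algorithm, just a toy log tone map on synthetic data): a naive 8-bit conversion keeps only the top 8 bits and discards the faint signal, while a tone map of the full 16-bit data lifts it into the visible range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 16-bit frame: faint background (values around 500 ADU on a
# 65535-ADU scale) plus one bright star core near saturation.
frame16 = rng.normal(500, 100, (64, 64)).clip(0, 65535).astype(np.uint16)
frame16[32, 32] = 60000

# Naive 8-bit conversion: keep only the top 8 bits.
# The faint background collapses to values of about 1-3.
frame8 = (frame16 >> 8).astype(np.uint8)

# Simple log tone map of the single 16-bit frame: compresses the bright
# star while lifting the faint background into the displayable range.
hdr = np.log1p(frame16.astype(np.float64))
hdr8 = (255 * hdr / hdr.max()).astype(np.uint8)
```

In the 8-bit version the background sits in the bottom few codes; in the tone-mapped version the same pixels land around mid-scale, which is the effect visible in the M31 comparison.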

Honestly, JetsonSky works miracles on this M31 capture, considering the quite poor quality of the sensor (I mean the sensor & the ADC) and the lens I used (135 mm, with a quite high F/D of 2.5). With an 8-bit capture, very high gain, and a short exposure time (between 500 and 1000 ms), M31 does not look like my M31. More importantly, M110 is clearly visible (it is hard to see in 8-bit mode).

In a perfect world, for my specific purpose I would need an sCMOS sensor and a wide-open lens with excellent optics to get something really good.

A cooled sensor would be a great help, but the price is a bit high (for example, a cooled camera with an IMX533MC sensor costs more than 1000 €).

Concerning higher clock rates, that is also a parameter to consider. A higher clock rate means more noise. With my camera, I can choose between 2 capture modes: slow and fast. Fast brings lower image quality and higher noise. My sensor is a rolling shutter sensor, which is better for noise; a global shutter sensor is interesting, but it brings more noise.

Considering my current equipment, it will be difficult for me to get a much better result with M31. If the setup stays the same, 2 things can be improved:

  • the JetsonSky filters (but for now, I don’t have time to work on them)
  • the sky quality (but it will be hard for me to improve this parameter).

Concerning the setup, it’s always the same problem: money.

A very good lens will always be expensive. I have tried some cheap Chinese lenses, but the quality is not good enough.

Sensor evolution has not been really amazing these last years: quite few new sensors, and no significant improvements.

I just have to wait.

Alain

Reference scenes would seem to be useful. The long exposure with a closed lens would be one of those, but if you imagine a closed room with controlled lighting (which is highly calibrated in both spectrum and intensity at every part of the spectrum), perhaps with different colors typical of starlight, then it would be fascinating to see exactly what noise shows up. The black screen (closed lens) is the only one practical to test though.

Have you tried a long exposure with a closed lens cap at both 8-bit and 16-bit? You’d be keeping the scene constant. I’m rather curious as to how the two differ. Sadly, I don’t know how one would try different A/D converters (I have no idea where the A/D is at on this). Does 8-bit and HDR comparison reveal anything interesting with the closed lens cap?

The rolling shutter adds something interesting to test: The closed lens cap test at 16-bit would in theory have no detriment between global and rolling shutter, but if for some reason there is a difference when the lens cap is closed, then it might point to something about the noise source.
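One way to run that closed-cap comparison is to capture a stack of dark frames in each mode and compare per-pixel temporal statistics, normalized to full scale so the two bit depths are comparable. In this sketch the capture function and its noise figures are entirely made up; with a real camera you would replace them with actual frames:

```python
import numpy as np

rng = np.random.default_rng(2)

def capture_dark_stack(n_frames, bit_depth, read_noise_adu):
    """Simulated closed-cap capture; the offset and noise are invented."""
    full_scale = 2 ** bit_depth - 1
    offset = 0.02 * full_scale  # camera bias offset
    frames = offset + rng.normal(0, read_noise_adu, (n_frames, 32, 32))
    return frames.clip(0, full_scale)

def temporal_noise(stack):
    # Std over time for each pixel, then the median across pixels:
    # one robust number for "how noisy is this mode".
    return np.median(stack.std(axis=0))

stack8 = capture_dark_stack(64, 8, read_noise_adu=1.0)
stack16 = capture_dark_stack(64, 16, read_noise_adu=200.0)

# Express noise as a fraction of full scale so the modes are comparable.
rel8 = temporal_noise(stack8) / 255
rel16 = temporal_noise(stack16) / 65535
```

The same per-pixel maps (before taking the median) would also show whether the noisy pixels sit in fixed locations (sensor side) or move around (readout side), which speaks to the sample-and-hold question.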

Sorry, I can’t help but wonder and am thinking out loud. I don’t know enough about the sensors, but when I see pictures it makes my brain ask questions. There are so many different types of noise, and if we know which noise is the issue, then maybe there is some way to work around it.

Hi Alain,

Jim from JetsonHacks pointed me to this location, so I started reading through your interesting project. Congratulations; from Pascal programming in the early days, you have made a long journey on the Jetson path!

EAA stood out to me, because my interest is more in the position of a specific star or astro object.
Sailors without a GPS are interested in the exact position of an easily detectable astro object, like the top edge of the moon or a bright shining star.

Now my question: how do you identify the rough position of an astro object? Sailors use an ‘almanac’; there is a calculation model behind it, but over the decades it loses accuracy, so the model constantly becomes less precise.
So how do you ‘find’ your stars, and could we maybe use a Jetson Nano with a good enough lens to shoot a star, like sailors with their sextant?

Best,
-Pete

Hi linuxdev,

I did some tests with a fully closed lens to see the noise response of 8-bit & 16-bit captures.

Depending on the camera gain, I must say the noise is very small, and we can clearly see 2 kinds of noise:

  • pixel noise
  • banding noise, that is to say we can see horizontal bands. This banding is not stable in time and changes constantly.

I can see this noise very clearly when I amplify the signal strongly. The problem is that the noise is small but I amplify it a lot. Even if the noise is small, camera amplification & JetsonSky amplification leave noise in the result image.

I tried to capture darks and subtract the dark from the image, but this did not give a significant improvement. The amplification of the residual noise is that significant.
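For reference, a standard master-dark calibration looks like the sketch below (synthetic data). As noted above, it can only remove the fixed pattern; the random read noise survives the subtraction and is then amplified along with the signal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stack of dark frames: a fixed-pattern component (hot pixels,
# amp glow) plus random read noise that changes on every frame.
fixed_pattern = rng.integers(0, 50, (32, 32)).astype(np.float64)
darks = [fixed_pattern + rng.normal(0, 5, (32, 32)) for _ in range(16)]

# Master dark: averaging keeps the fixed pattern and averages out
# most of the random noise.
master_dark = np.mean(darks, axis=0)

# A "light" frame = signal + the same fixed pattern + fresh random noise.
signal = np.full((32, 32), 100.0)
light = signal + fixed_pattern + rng.normal(0, 5, (32, 32))

# Subtract in float (never directly in uint16, which would wrap
# around below zero).
calibrated = light - master_dark
```

The calibrated frame is much flatter than the raw light, but its pixel-to-pixel scatter is still the read noise of one frame, which matches the observation that the residual noise dominates once the signal is amplified.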

It is really interesting to see that a classical capture (camera gain not too high and a quite short exposure) without amplification is rather clean. But with this classical exposure mode, you see nothing on the screen. The signal is hidden in the first bits, and signal amplification raises the residual noise (pixel noise, banding noise, amp glow) to a high level.

I still have to work on noise management, even if my filters bring huge improvements (I have 3 different noise filters I can combine in multiple ways, including front apply and back apply). A clean solution would be to use an sCMOS camera, but to do this I would need to sell an eye (and I don’t want to).

Hi pitfo,

For now, I can identify my targets with my motorized mount. It’s an AZ-GTi mount, and I get its coordinates from the mount software. I also use the Stellarium software to display the mount coordinates. This allows me to know exactly which object I am looking at.

I could also use plate solving to identify the objects I look at. Plate solving calculates the image coordinates against a reference map of the sky. You can search for astrometry.net on the internet.

I don’t use that method because it does not fit my needs for now but maybe one day i will.

There are Python libraries like Astropy which provide useful tools, but for now I do not use them.

Today, the calculation of sky-object coordinates is quite precise, and several software solutions exist. For JetsonSky, I do not need such calculations (in fact I do, but I don’t really use them for now) because I focus on video processing. When the video processing is OK, maybe I will focus on plate solving and object identification.
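Not what JetsonSky does, but for Pete’s sextant question, the classical coordinate formulas behind an almanac fit in a few lines. This sketch converts RA/Dec to altitude/azimuth with a simplified sidereal-time formula, ignoring refraction, precession, and nutation, so it is only a first approximation:

```python
import math
from datetime import datetime, timezone

def altaz(ra_deg, dec_deg, lat_deg, lon_deg, when):
    """Approximate Alt/Az of a star (no refraction, precession, etc.)."""
    # Days since the J2000.0 epoch (2000-01-01 12:00 UTC).
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    d = (when - j2000).total_seconds() / 86400.0
    # Approximate local sidereal time, in degrees (east longitude positive).
    lst = (280.46061837 + 360.98564736629 * d + lon_deg) % 360.0
    ha = math.radians((lst - ra_deg) % 360.0)  # hour angle
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.degrees(math.asin(sin_alt))
    # Azimuth measured from north, through east.
    az = math.degrees(math.atan2(
        -math.sin(ha),
        math.tan(dec) * math.cos(lat) - math.sin(lat) * math.cos(ha)))
    return alt, az % 360.0

# Sanity check: Polaris (RA ~44.3 deg, Dec ~ +89.26 deg) should sit at an
# altitude close to the observer's latitude, at any time, near azimuth 0.
alt, az = altaz(44.3, 89.26, 48.85, 2.35,
                datetime(2024, 3, 1, 22, 0, tzinfo=timezone.utc))
```

For serious work you would use Astropy’s SkyCoord/AltAz machinery instead, which handles all the corrections this sketch leaves out.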

Alain

I doubt this would help, but if you could identify a single pixel (or cluster of pixels) which changes significantly under a closed lens cap over time, and map the data, then this could be put through a Fourier decomposition. This would tell us about any kind of repeating change (combinations of sines and cosines…basically a wheel that is turning, and not necessarily perfectly round). If some sort of pattern is found, then each pixel could have the changes at those signal rates removed. Basically what a sound equalizer does, but using light (think of increasing the sound level for frequencies you like, and removing the ones you don’t like).

The reason I don’t think this will help is because most true noise (e.g., thermal “popcorn”) will be completely missing any kind of cyclical change. However, if there is something discovered, then a GPU is quite good at working with real time Fourier transforms (it’s the ultimate equalizer). There is also the reverse: If the noise is random, but the locations actually receiving light can be described using a Fourier decomposition, then the light can still be separated from the noise by recomposing the parts which are not noise (those parts fitting a certain spectrum of sines and cosines).

There are already a lot of libraries out there which do Fourier series (decomposition and composition…the two ways of breaking down or synthesizing things from sines and cosines).
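As a concrete example of the decomposition idea: take one pixel’s value across a stack of dark frames and look at its magnitude spectrum. The 5 Hz interference injected below is purely hypothetical, standing in for whatever periodic source might exist:

```python
import numpy as np

rng = np.random.default_rng(3)

# Time series of one pixel over 256 dark frames captured at 50 fps:
# random read noise plus a hidden periodic component (a hypothetical
# 5 Hz electrical interference).
fps = 50.0
n = 256
t = np.arange(n) / fps
pixel = rng.normal(0, 1.0, n) + 2.0 * np.sin(2 * np.pi * 5.0 * t)

# Fourier decomposition of the pixel's history (DC removed first).
spectrum = np.abs(np.fft.rfft(pixel - pixel.mean()))
freqs = np.fft.rfftfreq(n, d=1 / fps)

peak_hz = freqs[spectrum.argmax()]
```

If the spectrum is flat, the noise is genuinely random and this approach gives nothing; a clear peak like the one here would point to a cyclical source that could be notched out per pixel.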

You are right linuxdev.

A noise-removal filter using the FFT exists. It performs a 2D FFT on the image, removes some of the FFT coefficients, and reconstructs a noise-free image. I have not tried it yet, but I have to.
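A minimal sketch of that kind of filter, using NumPy on synthetic data: transform, keep only a disc of low spatial frequencies, and transform back. The cutoff radius is a tuning parameter, and a hard mask like this can ring on real images (a smooth taper is usually preferred):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic image: a smooth blob (the "signal") plus per-pixel noise.
y, x = np.mgrid[0:64, 0:64]
clean = 100.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0)
noisy = clean + rng.normal(0, 10.0, clean.shape)

# 2D FFT, zero out high spatial frequencies, inverse FFT.
f = np.fft.fftshift(np.fft.fft2(noisy))
cy, cx = np.array(f.shape) // 2
radius = 10  # low-pass cutoff, a tuning parameter
yy, xx = np.mgrid[0:f.shape[0], 0:f.shape[1]]
mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
denoised = np.fft.ifft2(np.fft.ifftshift(f * mask)).real

# Mean absolute error against the known clean image, before and after.
err_noisy = np.abs(noisy - clean).mean()
err_denoised = np.abs(denoised - clean).mean()
```

This works here because the smooth blob lives entirely in low spatial frequencies; real star fields contain sharp point sources, so the mask design matters much more in practice.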

Some months ago, I thought about a piece of software which would perform a deep analysis of the camera sensor & ADC (across gain and exposure variations) to create a correction map I would apply to the captures.

If I have time, I will try to make something like that in order to get cleaner captures.

You should buy a camera one day and do some experiments with it!

Alain

Thank you, Alain!

Your AZ-GTi mount is very smart!
I’ll start checking with astrometry.net.

Happy astro trails,
-Pete

I would love to, but the same old story: No money! 😒

I understand. Not easy.

Maybe one day you will find a not-too-expensive second-hand camera with a small CCTV lens, which would allow you to do some interesting experiments!

I really hope so.

Alain

Lots of cheap cameras exist, but I think the really hard part is the tracking mount. Photos from something with an exposure so short that a motorized mount is not required are typically not very interesting. But you are right that experiments are possible; unfortunately, if I can’t see nice pictures and am only researching noise correction, then there isn’t enough motivation to actually try.