Electronically Assisted Astronomy with a Jetson Nano

I was not sure the apt-installed KStars was really OK. I had some problems with mount communication in KStars, so I decided to build KStars from source.

I also updated the mount's firmware. Now everything works, but to be honest, I don't know which change fixed it (the KStars build or the mount firmware update).

Alain

It’s time for a small outdoor test, even if the sky is not really good.

The main goal is to see if this system is easy to use and whether there are any major issues.


Alain

Hello,

The test is over, so here is some quick feedback.

Running JetsonSky on the AGX Orin is OK. I will need to make small adjustments, but everything works fine. The AGX Orin performs at least as well as my laptop. Really impressive.

I was able to manage the mount with KStars, but I have some small issues to solve. They are not problems with the AGX Orin; they are problems between KStars and the mount. Nothing really serious, but it can take some time to solve those small issues (I must be in real conditions to see if things are OK or not).

Here is a video of some of my captures. I used my 50mm lens, which gives a fairly wide field of view, but not a huge one. To give you an idea of the value of using a camera: with my naked eyes I could only see about 50 stars, while with the camera I can see very faint stars (down to magnitude 12!).

Now I have some work to do to solve the many small issues with JetsonSky and make KStars work properly with the mount!

Alain

I am curious about any kind of compression used. Do you start with a raw image format, or do you use something the camera provides in a compressed form?


The camera gives me a RAW image. It's better for processing, for sure.

When I do post-processing with my software, I very often work with a compressed format. Processing works OK with compressed files, but the results are a bit worse than with the raw format.

Is the compressed format lossy? Is there any kind of antialiasing used with the final image?

The compressed format is lossy if I select it. I can also choose the raw format for my video output.

When I choose the compressed format, there is no anti-aliasing.


Hello,

I made some quick benchmark comparisons, and here are some results (with noise removal routines):

3000x2000 video acquisition:

KNN: Nano 220ms vs Xavier NX 75ms vs AGX Orin 35ms
NLM2: Nano 627ms vs Xavier NX 200ms vs AGX Orin 68ms
AADFP: Nano 150ms vs Xavier NX 90ms vs AGX Orin 50ms

1600x1200 video acquisition:

KNN: Nano 67ms vs Xavier NX 30ms vs AGX Orin 21ms
NLM2: Nano 190ms vs Xavier NX 62ms vs AGX Orin 30ms
AADFP: Nano 48ms vs Xavier NX 25ms vs AGX Orin 17ms

For reference, on my laptop (i7-8750H + GTX 1060):

3000x2000 video:
KNN: 38ms (AGX Orin 35ms)
Fast NLM2: 53ms (AGX Orin 68ms)
AADFP: 99ms (AGX Orin 50ms)

1600x1200 video:
KNN: 15ms (AGX Orin 21ms)
Fast NLM2: 21ms (AGX Orin 30ms)
AADFP: 31ms (AGX Orin 17ms)

I guess the difference is not bigger because of the memory transfer needed for each picture. I have to look into that point more deeply.
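To illustrate what I mean (a minimal sketch, not JetsonSky's actual code), you can time the host-to-device copy separately from the processing itself with CuPy. The frame size matches my 3000x2000 tests; the "processing" step is just a stand-in for a real denoise filter:

import numpy as np
import cupy as cp

# One dummy 3000x2000 RGB frame on the host (what the camera would deliver)
frame = np.random.randint(0, 255, (2000, 3000, 3), dtype=np.uint8)

start, stop = cp.cuda.Event(), cp.cuda.Event()

# Time the host -> device transfer alone
start.record()
gpu_frame = cp.asarray(frame)
stop.record()
stop.synchronize()
print(f"transfer: {cp.cuda.get_elapsed_time(start, stop):.1f} ms")

# Time a stand-in processing step alone (a crude 3-pass vertical smoothing,
# just to put some arithmetic on the GPU; not a real denoise routine)
work = gpu_frame.astype(cp.float32)
start.record()
for _ in range(3):
    work = (work + cp.roll(work, 1, axis=0) + cp.roll(work, -1, axis=0)) / 3.0
stop.record()
stop.synchronize()
print(f"processing: {cp.cuda.get_elapsed_time(start, stop):.1f} ms")

If the transfer time is a big share of the per-frame budget, a faster GPU cannot make the total much smaller.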

Alain

I have investigated some OpenCV functions, mainly cv2.split and cv2.merge.

When processing colour video, I have to work on each channel (R, G, B), which means I need to split the image.

In that case, why not use the cv2.split function? Because that function costs a lot of time!

I am an idiot. I benchmarked “cv2.split + my function” instead of benchmarking “my function” alone.

Now I will see how I can remove cv2.split and work on memory optimization.

Sometimes, I do stupid things.

Alain

I am stupid^2

Image = camera.capture_video_frame(filename=None)

I thought I could do this:
R = Image[:, :, 0]
G = Image[:, :, 1]
B = Image[:, :, 2]

It works but … it doesn't: those slices are only views into the original array (non-contiguous in memory), not independent copies.

So I still need to use the cv2.split function to get the 3 channels and be able to process them.
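For anyone who wants to see the difference, here is a minimal sketch (dummy frame, not my actual code). A NumPy slice is a non-contiguous view, while cv2.split (or np.ascontiguousarray on a slice) gives real, independent copies:

import numpy as np
import cv2

image = np.random.randint(0, 255, (1200, 1600, 3), dtype=np.uint8)  # dummy BGR frame

# A slice is a view: no data is copied, and the memory is non-contiguous
r_view = image[:, :, 2]
print(r_view.base is image, r_view.flags["C_CONTIGUOUS"])   # True False

# cv2.split returns three independent, contiguous arrays (at the cost of a copy)
b, g, r = cv2.split(image)
print(r.flags["C_CONTIGUOUS"])                              # True

# np.ascontiguousarray makes a contiguous copy from a view, channel by channel
r_copy = np.ascontiguousarray(image[:, :, 2])
print(np.array_equal(r_copy, r))                            # True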

Anyway, this gave me the opportunity to clean up my code. Now it's much better.

Compared with my laptop, the AGX Orin is very close. I thought it would be a bit faster, but it is not.

Benchmarks are one thing; real use is something different. I must say my software runs more smoothly on the Orin than on my laptop. I guess there are 2 main reasons:

  • the 12 Arm Cortex-A78AE cores are better than the Intel i7-8750H
  • the AGX Orin's USB 3 is better than my laptop's USB 3

Alain

Hello,

For those who would like to test my own noise removal filter (AADF): it works a bit like a car shock absorber.

This filter only works with video, because it needs at least 2 frames (N-1 and N) to compare in order to manage each pixel's variation.
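To give the idea before you try the real test program below, here is a very simplified sketch of that kind of temporal filter (my damping factor is just an example, and this is not the real AADF code): each pixel only moves part of the way towards its new value, so frame-to-frame noise is absorbed like a bump by a shock absorber, while persistent changes still come through over a few frames.

import numpy as np

def shock_absorber(prev, curr, damping=0.6):
    """Damp the per-pixel variation between frame N-1 and frame N.

    Simplified sketch, not the real AADF code: each pixel moves only a
    fraction `damping` of the way from its previous value to its new value.
    """
    prev_f = prev.astype(np.float32)
    delta = curr.astype(np.float32) - prev_f
    return np.clip(prev_f + damping * delta, 0, 255).astype(np.uint8)

# Usage on a stream: filter frame N against the *filtered* frame N-1
previous = None
for _ in range(10):
    frame = np.random.randint(0, 255, (1200, 1600), dtype=np.uint8)  # dummy noisy frame
    filtered = frame if previous is None else shock_absorber(previous, frame)
    previous = filtered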

You can find a test program here:

Alain


Hello,

I made a new test last night.

The Jetson AGX Orin was used for camera setup/control and live processing (no post-processing).

I used 3 different lenses with the camera:

  • 8mm f/1.0
  • 50mm f/1.4
  • 135mm f/2.5

The target was the Milky Way (looking to the north, north-east, and east).

The sky was quite good, which is interesting because I was able to see things like NGC 7000 (the North America Nebula). It is really cool to start to see deep-sky structures in colour (you can really see the colours with your own eyes on the live view).

The video:

This test was really cool. It's really great to see space objects on live video. Really impressive.

My lenses are not fast enough (f/1.4 is not fast enough, and f/2.5 is far too slow for live viewing with quite short exposure times).

I will try to find something with around a 33mm or 50mm focal length and an f/0.95 aperture. If I can get something like that to fit on my camera, it will be terrific.
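To see why the aperture matters so much here: for an extended target, the light reaching each pixel scales as 1/N² with f-number N. A quick check:

# Light per pixel per unit exposure scales as 1/N^2 (N = f-number)
for n in (0.95, 1.4, 2.5):
    print(f"f/{n}: {(2.5 / n) ** 2:.1f}x the light of f/2.5")

So an f/0.95 lens collects roughly 2.2x more light than my f/1.4 and about 7x more than the f/2.5, which is exactly what short live exposures need.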

It's too bad Jensen Huang does not have time for astronomy. He could get a very good camera and a NIKKOR Z 58mm f/0.95 lens. With that equipment and a good sky, the results would be really incredible.

I would like to emphasize one or two points:

  • for sure, I have to push the camera gain really high, but keep in mind that without live processing you could never see so much detail in the video, and the signal would be much noisier
  • as far as I know, this video is quite unique; you won't see such a video anywhere else (you will certainly find timelapses, but shot with long exposures, whereas my exposure times are only 200 to 500ms max)

Alain


Other examples from last night's tests:

Alain


Hey Alain,
thanks a lot for sharing. That's very kind.
Maybe sometime you will decide to make your cool stuff more public again.
Until then, my eyes are glued to this thread.

Best, Thomas.

Hello Thomas,

I will release the entire code in a few weeks. I still need to remove some bugs and solve some issues I have on the Jetson that I don't have on my laptop.

The only big problem is that my software needs a ZWO camera to work. With picamera, for example, I can't set the parameters I need, so there is no way to make the software work with more common cameras.
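To show what I mean by parameters, here is a minimal sketch using the zwoasi Python binding (the SDK library path and the values are just examples, not my real settings):

import zwoasi as asi

# Path to ZWO's ASI SDK shared library (example path, adjust for your system)
asi.init("/usr/local/lib/libASICamera2.so")

camera = asi.Camera(0)                               # first connected ZWO camera

# The kind of manual controls my software depends on:
camera.set_control_value(asi.ASI_GAIN, 400)          # high gain for short exposures
camera.set_control_value(asi.ASI_EXPOSURE, 500000)   # exposure in microseconds (500 ms)
camera.set_control_value(asi.ASI_WB_R, 52)           # manual white balance, red
camera.set_control_value(asi.ASI_WB_B, 95)           # manual white balance, blue
camera.set_image_type(asi.ASI_IMG_RAW8)              # raw output from the sensor

camera.start_video_capture()
frame = camera.capture_video_frame(filename=None)    # one raw frame, as in the posts above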

Alain


Your work is so amazing that I would buy a ZWO for that without a doubt.
But this is amazing news; I really do appreciate it. You made my day.
Thanks a lot.


I have one question, please:
Do you have a Pi 4 buffering the video, or does the stream go directly into the Jetson?

The stream goes directly into the Jetson. I don't use an RPi in my software. It's a stand-alone system using only a Jetson SBC.

Alain

Perfect. By the way, I already have the camera :-))

Best

Thomas