Electronically Assisted Astronomy with a Jetson Nano

I will investigate and run some tests.

I will also check that I have the latest camera SDK. That kind of issue is not easy to solve.

I will see how Python manages the threads.

Many thanks for your help.



New version V20_05RC (release candidate) added on GitHub. These are test versions.

3 programs :

  • JetsonSky : needs a ZWO camera - acquisition and live treatments
  • Jetson Videos : dedicated to video treatments (load an existing video and apply treatments). No camera management
  • Jetson Images : dedicated to image treatments (load an existing image and apply treatments). No camera management

The Video and Image versions are more user friendly (you can directly load and save images/videos).

The 3 programs now use relative paths. This means you no longer need to set your entire path in the code. You just have to respect this directory structure :







As it is RC software, I guess there are some bugs inside.
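To illustrate what relative paths mean in practice, here is a minimal Python sketch; the folder names are hypothetical examples, not the actual JetsonSky tree:

```python
import os

# Resolve data folders relative to the script location,
# not relative to the current working directory
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# Hypothetical subfolder names -- use whatever the real tree contains
videos_dir = os.path.join(BASE_DIR, "Videos")
images_dir = os.path.join(BASE_DIR, "Images")
```

This way the program works no matter where you cloned the repository, as long as the subfolders sit next to the script.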

GStreamer is not used in the 20_05RC versions. I need to find the right settings for GStreamer before using it.


I used the latest ZWO SDK but it brings no improvement.

I looked at the PID with htop but it is impossible to find the right process.

I changed my timeout management with the camera and things are better, but I still get a few read errors (fewer than before).

It must be a very complex issue because the image acquisition works more than 99% of the time. I think it is a problem with the Jetson AGX. Or not.


There might be some option to “pstree” (e.g., “pstree -p -T | less”, but it also has sorting and search options) to narrow it down to the root of the python command. Or “pidof <name of program>”. It is difficult to directly profile something of this nature, but if you can get the PID and increase its priority (decrease the niceness), then you could get an idea of whether or not the CPU time is just not enough.
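Following up on that suggestion, here is a small Python sketch (my own illustration, not from the thread) showing how a script can read its own PID and niceness; note that decreasing niceness below 0 requires root:

```python
import os

# The PID you would otherwise hunt for with pstree or pidof
pid = os.getpid()
print(f"python process PID: {pid}")

# os.nice(increment) adds the increment to the current niceness and
# returns the new value; an increment of 0 just reads the current value.
# Decreasing niceness (raising priority) needs root privileges.
niceness = os.nice(0)
print(f"current niceness: {niceness}")
```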

I will take a look at pstree and pidof. They can always be useful!

Thx linuxdev.


The use of the GPU for fast treatments is interesting. Many libraries use GPU capabilities now.

I use some of them, but you can run into issues when trying to use several libraries together (GPU context problem).

For example, if you want to use PyCuda and CuPy and encode a video with GStreamer and an NVIDIA codec, you may have a GPU context problem.

To solve this GPU context issue, you can try this (example) :

import cupy
import pycuda.driver as drv
import pycuda.autoprimaryctx  # makes PyCuda use the device's primary context

# Retain the primary context so PyCuda, CuPy and the NVIDIA codecs all share it
pycuda_ctx = drv.Device(0).retain_primary_context()

def my_function_calling_pycuda_routine():
    ...  # your code calling PyCuda routines

def my_function_using_cupy():
    ...  # your code using CuPy functions

# main program
my_function_calling_pycuda_routine()
# use GStreamer with an NVIDIA encode codec
my_function_using_cupy()
my_function_calling_pycuda_routine()
# use GStreamer with an NVIDIA encode codec

It’s a crappy example but maybe it could help.



I have released a new version of JetsonSky (camera management) : V20_08 beta

Better frame management
FPS added in TIP (Text In Picture)
GStreamer hardware encoding enabled for compressed colour video.

With the better frame management (I get better synchronization between image acquisition and treatment), I realize the Jetson AGX Orin gives better results than my Windows laptop.

For example, I get about 10 FPS with a 4K video on my laptop and about 14 FPS with the AGX Orin.

V20_08 is a beta version because I did not test it in real conditions.
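For readers curious about synchronizing acquisition and treatment, here is a minimal producer-consumer sketch in Python. This is my own assumption of one common approach, not JetsonSky's actual code:

```python
import queue
import threading

# Small buffer: the treatment thread never lags far behind acquisition
frame_queue = queue.Queue(maxsize=2)

def acquisition_thread(n_frames):
    for i in range(n_frames):
        frame = f"frame-{i}"      # stand-in for a camera read
        frame_queue.put(frame)    # blocks if treatment is behind
    frame_queue.put(None)         # sentinel: no more frames

def treatment_thread(results):
    while True:
        frame = frame_queue.get()
        if frame is None:
            break
        results.append(frame.upper())  # stand-in for the image treatment

results = []
t1 = threading.Thread(target=acquisition_thread, args=(5,))
t2 = threading.Thread(target=treatment_thread, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
```

The bounded queue is what keeps acquisition and treatment in step: the producer blocks instead of piling up frames the consumer cannot process in time.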



I have uploaded a new V20_09Beta version of JetsonSky. Some small improvements.

Many thanks to Honey_Patouceul.

I have also uploaded the latest version of the ZWO SDK for Linux (ARM v8).



At last, I have started working with PyTorch, especially TorchVision. First, I use contrast adjustment to improve the contrast of large-field deep-sky videos.

It works quite well, despite a longer treatment time. I need to test it on the AGX Orin to see if the AGX Orin GPU does better than my laptop's GTX 1060 GPU.


Bad weather here, so it is impossible to make deep-sky video tests with JetsonSky.

I also have an image treatment version of JetsonSky, and here is an example of a treatment I made quickly with Jupiter (Galileo probe image) :

I will make other tests on different targets if I can find hi-res pictures.



That’s astonishing! Makes me wish I had the money to print large art and frame it and hang it!

Hello linuxdev,

I will search for beautiful hires pictures from NASA and try some treatments. If the result is fine, I will send you the images.


Thanks, I wish I had my own James Webb telescope after looking at your thread…

The JWST is a bit expensive! A Celestron C14 HD would also be cool, and cheaper.

I made other tests with the JetsonSky images version. The results are interesting (at least, they please me).



Boah man, I love these images. Who is the real magician here???
Maybe both of you, but Alain you’re my favorite.
Look at this Europa-Image, awesome.


I really like those processed images. @Thommy78 has the right word: Magician. Makes me wish I had one of the million dollar printers I used to code drivers for. That’s true art.

Many thanks Thommy. For now, the sky is bad, so I try to do new things.


Hello linuxdev,

Thanks, I am really happy you enjoy the pictures.

I sent you a PM.


Bad weather, so I am writing new things for JetsonSky.

Until now, PyCuda is the fastest library for live treatment, but you have to write the algorithms yourself (a bit more complex).

CuPy is faster than NumPy (about 2x faster), but it pays off mainly when you have many CuPy instructions in a function.
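To illustrate the point about chaining CuPy calls, here is a minimal sketch (a hypothetical linear stretch of an image, with a CPU fallback when CuPy is absent):

```python
import numpy as np

try:
    import cupy as cp  # needs an NVIDIA GPU and CUDA
except ImportError:
    cp = None

def stretch_numpy(img):
    # Simple linear stretch on the CPU with NumPy
    return (img - img.min()) / (img.max() - img.min())

img = np.array([[10.0, 20.0], [30.0, 50.0]], dtype=np.float32)
cpu_result = stretch_numpy(img)

if cp is not None:
    # Same chain of operations, but every step stays on the GPU:
    # one transfer in (asarray), one transfer out (asnumpy).
    img_gpu = cp.asarray(img)
    gpu_result = cp.asnumpy((img_gpu - img_gpu.min()) / (img_gpu.max() - img_gpu.min()))
```

The GPU version only wins when several operations run between the two transfers; a single CuPy call surrounded by host/device copies is usually slower than plain NumPy.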

OpenCV without CUDA is very fast (PyCuda gives better results). OpenCV with CUDA is useless when you use very few OpenCV CUDA functions.

TorchVision is interesting because of the huge number of functions and because it is very simple to use. But PyCuda is 2 to 4 times faster than TorchVision functions. Anyway, I included some TorchVision functions in JetsonSky (contrast, gamma, saturation).

TorchVision is very easy to use. For example (modify the contrast of a NumPy image) :

import numpy as np
import PIL.Image
import torchvision.transforms.functional as F
# import torchvision.transforms.v2.functional as F  # if you want the v2 transforms

image_PIL = PIL.Image.fromarray(numpy_img)
image_PIL_contrast = F.adjust_contrast(image_PIL, val_contrast)
numpy_img = np.asarray(image_PIL_contrast)

VPI is really cool and quite fast (but PyCuda is still faster). I hope this library will grow. For now, Windows does not support VPI (I hope it will).

So :
PyCuda is a must-have (but needs coding)
VPI is very good (but few functions and no Windows support)
CuPy is fast and has many functions (but no image treatment functions)
OpenCV has many functions and is quite fast
OpenCV+CUDA is useless in most use cases
TorchVision has many functions but it could be faster

For high frame rates, PyCuda + CuPy + VPI is OK. If you miss some functions, use OpenCV or TorchVision (but only if you can't write the function yourself with PyCuda or CuPy).


I have successfully merged the JetsonSky camera version and the JetsonSky video-treatment-only version. It will be simpler to manage only one program. The image treatment version of JetsonSky is more stable and I don't bring improvements to it very often.

I will make some tests to see if the new JetsonSky is stable before releasing it on GitHub. I also have to add some test conditions around the library imports, in order to avoid errors if a library is not loaded.
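Those test conditions can be sketched as simple import guards; this is a hypothetical example, not JetsonSky's actual code:

```python
import numpy as np

# Guard an optional library so the program still runs when it is missing
try:
    import cupy as cp
    flag_cupy = True
except ImportError:
    cp = None
    flag_cupy = False

def double(img):
    # Use the GPU path when CuPy loaded, otherwise fall back to NumPy
    if flag_cupy:
        return cp.asnumpy(cp.asarray(img) * 2)
    return img * 2

result = double(np.array([1, 2, 3]))
```

Each optional library gets its own flag, and the treatment functions branch on the flags instead of crashing at import time.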

