Electronically Assisted Astronomy with a Jetson Nano

Hello,

I have decided to create this topic to show and explain my project.

First, I would like to apologize for my poor English (I am French).

So, Electronically Assisted Astronomy (EAA) is quite different from traditional astrophotography.

With EAA, the camera replaces your eye. That is to say, you look at the sky with a camera instead of your eye, and the purpose is not to make a beautiful picture of the sky (although it could be) but simply to observe the deep sky (watching the sky on a screen or even making a video of your observations).

In July 2018, I decided to create my own EAA system. I wanted to make a fully autonomous system I could control from a laptop PC at home (using a VNC client) instead of being outside during very cold nights.

My EAA system is quite simple :

  • a 2 axis mount with stepper motors
  • an absolute gyroscope/compass to know where my camera is pointing
  • a planetary camera
  • a lens

I made a prototype to test the mechanical system, the electronics, the computer (SBC) and the software solutions.

For my first try, I decided to use a Raspberry Pi 3 B+ to control the mount and the camera. Concerning the software, I chose Python because:

  • the last time I wrote a program was in 1994, using Turbo Pascal, so my software knowledge is not really up to date!
  • Python is quite a simple language and I wanted to get results quickly

So, I wrote 2 programs:

  • the first program controls the camera, applies real-time filters to the captured frames and saves videos
  • the second program controls the mount, allows manual control of the mount with a joystick and gets information from the gyroscope/compass.

It took me about 5 months to write the programs and create the whole system.

Here is what my capture software looks like :

And here is what my system looks like :

The system is inspired by the 2-meter telescope at the Pic du Midi in France:

  • at the top, we have the mount (motors), the optics (an old but good Canon FD 50mm f/1.4 lens) and the camera,
  • below, we have the electronics and the computer,
  • at the base, we have the power stage.

Here is the system with its case :

The system can communicate with my laptop PC through WIFI or Broadband over power line.

Don't forget this is a prototype. I have learned a million things building it and I am planning to make a new version: stronger, more precise and, I hope, definitive.

But for the coming months, software is my priority.

Why real-time filtering of the captured frames?

A planetary camera does the job quite well, but as deep sky objects are not very bright, we have to raise the camera gain quite high (to avoid long exposure times; EAA needs short exposure times) and therefore you get a very bad and noisy image.

We also have to manage light pollution to get a sky as dark as possible while preserving the light of deep sky objects.

Of course, I could apply post-capture processing but that is not the EAA philosophy. EAA means real-time visualisation, so the filters must be real-time filters.

The Raspberry Pi 3 B+ performs the real-time filtering to improve the video quality.

The camera is a ZWO ASI178MC using a Sony IMX178 colour sensor. It's a very good planetary camera.

I am in touch with ZWO to try to find a better camera. As they are interested in my project, maybe they will agree to provide me with a new camera with a better sensor to improve this EAA system.

You can see some examples here :

There are still some improvements to make but the results are interesting.

Real-time filtering needs a very good CPU and the Raspberry Pi is not the best for that kind of work. So I decided to get a more powerful SBC.

Some SBCs have a much better CPU, like the Odroid N2 with its big.LITTLE CPU. I tested one and the result is much better than the Raspberry Pi, but there are still some problems:

  • the Mali GPU can't be used and all the work is done by the CPU!
  • the software support is really really weak.

So, I decided to ask Nvidia if they could support my project and they said YES. That's really, really cool.

So, with the Jetson Nano, I will try to get the Maxwell GPU working in order to get much better real-time processing.

This is really interesting for my project. But I will have a lot of heavy work to do because:

  • I will probably have to rewrite my acquisition software (camera control) using C++
  • I will have to learn CUDA or at least be able to use OpenCV with CUDA optimisations.

It is heavy work but also very interesting work. And this time, I will have all of Nvidia's experience and knowledge with me (something Odroid can't bring me). This can make all the difference.

If my work brings some interesting results, it could be used on more powerful platforms (like a Windows 10 PC with a very good Nvidia GPU).

This is important. You know, astrophotography (mainly planet or Moon captures) needs very high frame rates and, for now, that is not compatible with real-time filtering to improve acquisition quality (raw captures are really bad and noisy). Sensors are getting better and better but real-time filtering could really improve sensor captures.

So, I am waiting for the Jetson Nano to arrive and I will start to learn C++ and CUDA, hoping for very interesting and good results. I will probably ask a million questions on the Jetson Nano forum but I am sure we will find an answer for every question.

The best is yet to come.

Alain


Hi Alain, thanks for sharing - very interesting project!

Looking forward to seeing the results from Nano, keep us posted.

Hello dusty_nv,

many thanks for your comment.

First, I will try to make my Python software work with the Jetson Nano. Then, I will see if I can get some improvements with PyCuda.

In the meantime, I will get back to C++ (the last time I used it was in 1994!). I will rewrite my software using C++ (at least camera settings and acquisition, without real-time filtering), trying to use GTK+ to replace Tkinter.

When this part is OK, I will work on real-time filtering:

  • first, I will use OpenCV with CUDA improvements
  • second, I will try to use pure CUDA if I can

It's a long-term project. If everything works well, I will think about some 3D representation of the image luminance (X, Y, Z representation) or other things like that.
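
Just to illustrate the idea (nothing is implemented yet), the luminance could be plotted as a 3D surface with matplotlib, something like this (the file name is only a placeholder):

import cv2
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # enables the 3D projection

img = cv2.imread("image.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name
img = cv2.resize(img, (200, 150))                      # downsample to keep the plot light
y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_surface(x, y, img, cmap="viridis")             # Z axis = pixel luminance
ax.set_xlabel("X"); ax.set_ylabel("Y"); ax.set_zlabel("Luminance")
plt.show()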

I also want to see if I can do astrometry (it's a bit like image recognition) in order to identify what we can see in a video capture. It could be quite interesting.

http://nova.astrometry.net/

Maybe Nvidia is already involved in astrophotography processing, working with NASA or ESO? In the coming years, there will be some brand new professional telescopes and they will bring a huge amount of data. Nvidia could bring very interesting things to professional astronomy.

Concerning exoplanets for example, Google (I am not sure but I think it's Google) made an AI to look at data related to the search for exoplanets. From what I know, the AI found some exoplanets scientists had missed.

That kind of subject is really complex and needs very sharp knowledge of image processing and signal processing. Nvidia has that kind of knowledge.

Have a nice day.

Alain

Hello,

things can really start: I received my Jetson Nano yesterday (many thanks to the Nvidia team!).

Here is the beast :

External Media

I still need to get a 5 VDC 4 A power supply because, for now, I use a 5 VDC 2 A USB power supply, which is a bit weak.

Using the Nano for the first time is really simple. Just answer the questions and everything works fine.

As you know, I want to use the Nano to control a planetary camera to look at the sky, using an altazimuth mount and a wide-field lens.

The Jetson Nano will control the camera (acquisition settings, captures) and, most importantly, it will apply real-time filtering to the camera pictures (raw format) to improve the picture quality.

When I talk about real-time filtering (or processing), I mean the following (a small sketch comes after the list):

  • luminance corrections
  • noise reduction
  • sharpening
  • gradient removal
  • contrast enhancement
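
To give an idea of what this means, here is a very small CPU-only sketch with OpenCV (just an illustration with arbitrary parameters, not my real filters; gradient removal is shown further down):

import cv2

def process_frame(frame):
    # luminance / contrast correction (alpha = contrast, beta = brightness)
    frame = cv2.convertScaleAbs(frame, alpha=1.2, beta=10)
    # noise reduction
    frame = cv2.fastNlMeansDenoisingColored(frame, None, 5, 5, 7, 21)
    # sharpening with a simple unsharp mask
    blur = cv2.GaussianBlur(frame, (0, 0), 3)
    frame = cv2.addWeighted(frame, 1.5, blur, -0.5, 0)
    return frame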

Until now, I have written my software in Python, using the usual libraries like OpenCV, numpy, Pillow, Tkinter, etc.

That is to say, I do not use the GPU capabilities of the Nano (for now).

In order to get the best results with the Jetson Nano, I have to work step by step.

Step 1: making my software run in a classical Python environment.

First, I did the classical sudo apt-get update and upgrade. No problem with that.

Second: I had to check the Python libraries. Python 3.6 is already installed. I installed an IDE (Thonny) and I had to install numpy (the latest version), Pillow and PIL.ImageTk.

With the right Python environment, I was able to run my Python software to control the camera.

Concerning the camera, it is a ZWO ASI178MC colour camera (with a Sony IMX178 BSI sensor; quite a good sensor). I use the ZWO SDK with a Python wrapper.

My software works great with the Jetson Nano. The only problem is that my 5 VDC 2 A USB power supply is too weak to support the camera on USB3. I need a more powerful power supply. It will come.

But I can start processing tests with pictures, without the camera.

I mentioned gradient removal. Here are 2 examples of gradient removal with my personal algorithm:

External Media

External Media

This is quite a simple algorithm, using mainly OpenCV and numpy routines.
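
I won't show my exact algorithm here, but the general idea can be sketched like this (illustration only, with arbitrary resize and blur parameters):

import cv2
import numpy as np

def remove_gradient(img):
    # estimate the smooth background (sky gradient) with a strong blur on a downscaled copy
    img_f = img.astype(np.float32)
    small = cv2.resize(img_f, (img.shape[1] // 16, img.shape[0] // 16))
    background = cv2.resize(cv2.GaussianBlur(small, (63, 63), 0),
                            (img.shape[1], img.shape[0]))
    # subtract the background and restore the original mean level
    result = img_f - background + background.mean()
    return np.clip(result, 0, 255).astype(np.uint8)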

For a 3000x2000 pixel picture, it takes the Jetson Nano 2 s to perform the gradient removal, using only the Arm Cortex-A57 CPU (1.5 GHz).

For comparison, an Odroid N2 SBC, which has an Arm Cortex-A73 CPU (1.9 GHz), needs 1.5 s to perform the gradient removal.

That is to say the Jetson Nano CPU performs quite well.

I also made this test on my work laptop under Windows (i7-6700HQ CPU with GFX960 GPU). It needs about 0.2 s to perform the gradient removal.

So, my purpose is to use the Jetson Nano to control my camera in my standalone sky survey system.

Now that my basic software seems to work using plain Python programming, I will need to improve the processing speed.

When I am sure my camera works with the Nano (just waiting for the right power supply), I will start STEP 2.

STEP 2 : try to use PyCUDA.

To be continued.

Alain

PS: I wrote this post with the Nano. It works really great.

Hello,

installing PyCuda is a bit complicated for me. I still need to search for information on the net.

So, I am still in STEP 1 but this time, I have tried to compile OpenCV 4.1 with CUDA optimizations to use this library with my software.

I used the script provided by Nvidia, that is to say :

sudo apt-get purge libopencv

sudo apt-get update
sudo apt-get install -y build-essential cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install -y libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
sudo apt-get install -y python2.7-dev python3.6-dev python-dev python-numpy python3-numpy
sudo apt-get install -y libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
sudo apt-get install -y libv4l-dev v4l-utils qv4l2 v4l2ucp
sudo apt-get install -y curl
sudo apt-get update

cd
curl -L https://github.com/opencv/opencv/archive/4.1.0.zip -o opencv-4.1.0.zip
curl -L https://github.com/opencv/opencv_contrib/archive/4.1.0.zip -o opencv_contrib-4.1.0.zip
unzip opencv-4.1.0.zip
unzip opencv_contrib-4.1.0.zip
cd opencv-4.1.0/

mkdir release
cd release/
cmake -D WITH_CUDA=ON -D CUDA_ARCH_BIN="5.3" -D CUDA_ARCH_PTX="" -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.1.0/modules -D WITH_GSTREAMER=ON -D WITH_LIBV4L=ON -D BUILD_opencv_python2=ON -D BUILD_opencv_python3=ON -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
make -j3
sudo make install

cd python_loader
sudo python3 setup.py install

I made a 4 GB swap file to compile OpenCV.

Well, it seems everything works fine now with OpenCV 4.1.
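
To check that the build really picked up CUDA, a quick test from Python (just a sanity check):

import cv2
print(cv2.__version__)                         # should print 4.1.0
print(cv2.cuda.getCudaEnabledDeviceCount())    # should print 1 on the Jetson Nano
# cv2.getBuildInformation() also shows a "NVIDIA CUDA: YES" line if the build is OK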

I made a new test with my gradient removal treatment with this new opencv.

OpenCV 3.3 from SDK : 2s
OpenCV 4.1 with Cuda : 0.5s

Well, that's really cool. The processing time fell from 2 s to 0.5 s. That's great!

Alain

Hi Alain, great results and images so far, thanks for updating us all! The performance you are getting is encouraging.

Hello dusty_nv,

Yes, it is very encouraging.

To compare with other SBCs, I made this test with the Odroid N2, which is supposed to be one of the fastest low-cost SBCs on the market.

The Odroid N2 needs 1.5 s to perform the processing while the Jetson Nano only needs 0.5 s!

Jetson Nano wins!

Jetson Nano brings real power and great opportunities to my project.

Of course, full power will need C++ and CUDA but I will need time to learn them. That's a beautiful challenge.

Alain

Hello,

still working on the Jetson Nano configuration before starting real hard work on it, using the CUDA capabilities.

This time, I have succeeded in installing PyCuda on the Jetson Nano.

So, to install PyCuda (for CUDA 10.0) on a Jetson Nano with JetPack 4.2 and Python 3.6:

export CPATH=$CPATH:/usr/local/cuda-10.0/targets/aarch64-linux/include
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda-10.0/targets/aarch64-linux/lib
pip3 install pycuda --user

Then, you will have to add this (in order to allow Python to access nvcc) :

export PATH=$PATH:/usr/local/cuda/bin

It worked for me so I guess it will work for you.
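
To check that PyCuda really sees the GPU, a very small test (nothing more than a check):

import pycuda.driver as drv
import pycuda.autoinit            # creates a CUDA context on the default device

print(pycuda.autoinit.device.name())        # should print the Nano GPU name
free, total = drv.mem_get_info()
print(free // (1024 * 1024), "MB free /", total // (1024 * 1024), "MB total")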

That’s all for now.

Alain

I decided to run a processing test on a Deep Sky (DS) video using the Jetson Nano.

The processing (home made) removes the gradient and light pollution from the video of the sky. Gradient and light pollution are very big problems in astronomy photography and video.

You can see the result here (it's a bit chaotic but it's a test video); raw video on the left, result of the processing on the right:

The Jetson Nano (using OpenCV 4.1.0) performs really great. The video is 25 fps and the Nano can almost process it in real time. Really, really good. I am amazed.

With PyCuda, I assume I will get better results but it will take me a while because I have to understand how to replace the OpenCV routines with CUDA. Quite interesting.

That’s all.

Alain

I like that video. I am curious if you take a shuttered image prior to starting, and if the Nano also removes the noise found in the shuttered image from each frame?

Hello linuxdev,

I am not sure I fully understand your question, but I don't take a shuttered image before starting.

I have not made sky capture tests with the Nano yet because I have some things to solve first (in fact, I must get a stronger power supply and I must control the Nano with a VNC client over WiFi). I just made tests with the camera control. My software works with the Nano but my USB power supply is too weak and the Nano crashes when I plug in the camera.

The raw video was taken with a Raspberry Pi. The Nano just applied the gradient removal filter to the final video.

When the Nano is ready, it will:

  • control the camera (gain, resolution, binning, exposure time, flip H and V, colour, etc.)
  • control brightness and contrast
  • apply real-time filtering like blur, denoise, sharpen, bilateral filter, etc.
  • apply real-time gradient removal
  • stack 2 or 3 frames in real time if needed (a small sketch of this idea comes after the list)
  • capture images or videos (jpg, tif, raw or compressed video)
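
For the stacking part mentioned above, the idea is simply a rolling mean of the last frames, something like this (small sketch, not my real code):

import numpy as np
from collections import deque

class FrameStacker:
    # keeps the last N frames and returns their mean (simple real-time stacking)
    def __init__(self, size=3):
        self.frames = deque(maxlen=size)

    def stack(self, frame):
        self.frames.append(frame.astype(np.float32))
        return np.clip(np.mean(list(self.frames), axis=0), 0, 255).astype(np.uint8)

# usage: stacker = FrameStacker(3) then stacked = stacker.stack(raw_frame) for each new frame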

Everything works but I need a little time to fix the last details.

The software runs fine with Python, using mainly numpy, Pillow and OpenCV.
I would like to modify it with PyCuda to get better performance.

From what I see, the Jetson Nano is better than the Odroid N2, so the Nano will be in charge of the camera and the processing. My RPi 3 B+ will still be in charge of the motorized mount.

Finally, I think the whole system will be quite interesting for sky surveys.

When the Nano is ready for sky captures, I will make videos with and without real-time filtering to show what real-time filtering can really bring to sky surveys (my astronomy camera is quite good but as the exposure time is quite short, I have to raise the gain very high and the video capture without processing is really bad and noisy).

Alain

Hello,

nothing really new but I just wanted to point out one thing:

I have compiled the CUDA examples (from the CUDA toolkit) and tried some of them.

Well, how can I say it, this is… amazing.

The 3D demos are just crazy for such a small computer. It blows me away, really.

For my purposes, I have tested image filters like Gaussian blur, denoising, etc. The performance is huge, really.

If I can manage such filters in my software, the processing time will fall dramatically. With that kind of performance, real-time processing applied to planetary acquisitions (which need high frame rates, like 60 to 200 fps) will be possible.

So, the truth is Nvidia has made something incredible with the Jetson Nano. Really.

Congrats and thanks for the Jetson Nano.

Now, i have a lot of work to do.

Alain

Hello,

I have made my first test program using Python and PyCuda. It works great.

The test program is quite simple and its only purpose is to compare pycuda and classic Python performance.

For trigonometric calculations with iterations:
Python + Numpy : 4.39s
Python + pycuda 16 blocks 32 threads per block : 0.0042s

Quite impressive.

I made another test using only a simple numpy routine (I multiply 2 big arrays of 20 million elements each):
Python + Numpy : 1.6s
Python + pycuda 16 blocks 32 threads per block : 0.249s

The difference is much smaller.

In fact, the second test used many more Python lines than the first. So, we can see numpy is a very good Python library (very fast routines), but if you have some plain Python lines in the algorithm, you can easily see that Python is an interpreted language.
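
For those interested, the second test looks more or less like this (simplified sketch using gpuarray; my real test used an explicit kernel with 16 blocks of 32 threads):

import time
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray

n = 20_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

t0 = time.time()
c_cpu = a * b                              # CPU / numpy multiply
print("Numpy  :", time.time() - t0, "s")

a_gpu = gpuarray.to_gpu(a)                 # copy the arrays to the GPU
b_gpu = gpuarray.to_gpu(b)
t0 = time.time()
c_gpu = (a_gpu * b_gpu).get()              # GPU multiply, then copy the result back to the CPU
print("PyCuda :", time.time() - t0, "s")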

Well, I am very pleased with the result. PyCuda is fast and CUDA is not as difficult as I previously thought (but it is quite complex for sure).

I have to think “GPU” and not “CPU” and it is really interesting.

Alain

Hello,

still working with PyCuda and the Jetson Nano.

This time, I worked on some of the processing I do with my software SkyPi (SkyPi is Python software using mainly OpenCV and numpy; I use it to capture sky videos with real-time processing).

My test is about this processing:

  • use an RGB test image (M31 galaxy; this is not my picture)
  • make a negative version
  • change the brightness of the RGB layers

Here is the test image :

External Media

Here is the test result :

External Media

Here is the code of my test program :

import time
import numpy as np
import cv2
import pycuda.driver as drv
import pycuda.tools
import pycuda.autoinit
from pycuda.compiler import SourceModule
import pycuda.gpuarray as gpuarray
import pycuda.cumath
from pycuda.elementwise import ElementwiseKernel

Nom_Fichier_entree = "/home/alain/Work/Python/Dev/Traitement/M31s.jpg"
Nom_Fichier_Sortie_CPU = "/home/alain/Work/Python/Dev/Traitement/M31s_traitee_CPU.jpg"
Nom_Fichier_Sortie_GPU = "/home/alain/Work/Python/Dev/Traitement/M31s_traitee_GPU.jpg"

nb_blocks = 2048
nb_threads_per_block = 512
nb_threads_total = nb_blocks * nb_threads_per_block

mod = SourceModule("""
__global__ void traitement_GPU(unsigned char *dest_r, unsigned char *dest_g, unsigned char *dest_b, unsigned char *r, unsigned char *g, unsigned char *b, long int nb_pixels_per_thread)
{
  long int idx = blockDim.x*blockIdx.x + threadIdx.x;
  long int index;
  long int step = idx*nb_pixels_per_thread;

  for(long int n = 0; n < nb_pixels_per_thread; n++) {
    index = n + step;
    dest_r[index] = (int)(255.0*pow(((255.0 - r[index]) / 255.0),1.96));
    dest_g[index] = (int)(255.0*pow(((255.0 - g[index]) / 255.0),1.96));
    dest_b[index] = (int)(255.0*pow(((255.0 - b[index]) / 255.0),1.96));
  }
}
""")

traitement_GPU = mod.get_function("traitement_GPU")

image_brut_CV = cv2.imread(Nom_Fichier_entree,cv2.IMREAD_COLOR)
height,width,layers = image_brut_CV.shape
nb_pixels = height * width
nb_pixels_per_thread = nb_pixels // nb_threads_total
img_b,img_g,img_r = cv2.split(image_brut_CV)
r = img_r
dest_r = np.zeros_like(r)
g = img_g
dest_g = np.zeros_like(g)
b = img_b
dest_b = np.zeros_like(b)

tmp1 = time.time()
img_b = cv2.bitwise_not(img_b)
img_g = cv2.bitwise_not(img_g)
img_r = cv2.bitwise_not(img_r)
img_b=np.uint8(255*(img_b/255)**1.96)
img_g=np.uint8(255*(img_g/255)**1.96)
img_r=np.uint8(255*(img_r/255)**1.96)
image_brut_CPU=cv2.merge((img_b,img_g,img_r))
tmp_CPU = time.time() - tmp1              
print("Termine CPU")
print ("Temps traitement : ",tmp_CPU)
print("")
cv2.imwrite(Nom_Fichier_Sortie_CPU, cv2.cvtColor(image_brut_CPU, cv2.COLOR_BGR2RGB), [int(cv2.IMWRITE_JPEG_QUALITY), 95]) # JPEG file format

print("Test GPU 2048 Block 512 Threads/Block")
tmp1 = time.time()
traitement_GPU(drv.Out(dest_r), drv.Out(dest_g), drv.Out(dest_b), drv.In(r), drv.In(g), drv.In(b), np.int64(nb_pixels_per_thread), block=(nb_threads_per_block,1,1), grid=(nb_blocks,1))  # np.int64 matches the "long int" kernel argument
image_brut_GPU=cv2.merge((dest_b,dest_g,dest_r))
tmp_GPU = time.time() - tmp1
print("Termine GPU")
print ("Temps traitement : ",tmp_GPU)
print("")

cv2.imwrite(Nom_Fichier_Sortie_GPU, cv2.cvtColor(image_brut_GPU, cv2.COLOR_BGR2RGB), [int(cv2.IMWRITE_JPEG_QUALITY), 95]) # JPEG file format

The result is interesting.

Python + classical numpy : 2.53 seconds

Python + pycuda : 0.28 seconds

The PyCuda version is 9 times faster than the classical program.

Well, that’s all for now.

Alain

Hello,

still working on Jetson Nano and PyCuda.

External Media

I found a 5 VDC 2.5 A power supply at work to temporarily solve my problems with my USB 5 VDC power supply (the Jetson Nano crashes with it and the hot weather does not help very much).

First, I improved the management of the number of blocks and threads. Now, I use 2D blocks and a 2D grid.
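
The principle of the 2D grid is to map one thread to one pixel, something like this (illustration with an arbitrary 16x16 block size):

import numpy as np

width, height = 3096, 2080                       # my camera resolution, as an example
block = (16, 16, 1)                              # 256 threads per block
grid = ((width + block[0] - 1) // block[0],      # enough blocks to cover the whole image
        (height + block[1] - 1) // block[1])

# a kernel is then launched with block=block, grid=grid and each thread finds its pixel with:
#   x = blockIdx.x * blockDim.x + threadIdx.x
#   y = blockIdx.y * blockDim.y + threadIdx.y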

I use about 10 filters in my sky survey software. For now, I have managed to convert 8 of them (using the naive method of course; I am a complete beginner with the Jetson Nano).

Numpy routines are quite easy to use with PyCuda.

Filters like blur, Gaussian blur or sharpen need a convolution filter with different kernels, so I had to write a convolution filter using PyCuda. Not a big deal.
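
For those who wonder, the principle of such a convolution filter in PyCuda looks like this (simplified monochrome 3x3 version, illustration only, not my exact code):

import numpy as np
import cv2
import pycuda.driver as drv
import pycuda.autoinit
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void convolution_mono(unsigned char *dest, unsigned char *src,
                                 float *conv_kernel, int width, int height)
{
  int x = blockIdx.x * blockDim.x + threadIdx.x;
  int y = blockIdx.y * blockDim.y + threadIdx.y;
  if (x < 1 || y < 1 || x >= width - 1 || y >= height - 1) return;   // borders left untouched
  float sum = 0.0f;
  for (int j = -1; j <= 1; j++)
    for (int i = -1; i <= 1; i++)
      sum += conv_kernel[(j + 1) * 3 + (i + 1)] * src[(y + j) * width + (x + i)];
  dest[y * width + x] = (unsigned char)fminf(fmaxf(sum, 0.0f), 255.0f);   // clamp to 0..255
}
""")
convolution_mono = mod.get_function("convolution_mono")

img = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file name
height, width = img.shape
dest = np.zeros_like(img)
sharpen = np.array([0, -1, 0, -1, 5, -1, 0, -1, 0], dtype=np.float32)   # 3x3 sharpen kernel

block = (16, 16, 1)
grid = ((width + 15) // 16, (height + 15) // 16)
convolution_mono(drv.Out(dest), drv.In(img), drv.In(sharpen),
                 np.int32(width), np.int32(height), block=block, grid=grid)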

I made tests with the classical method (numpy + OpenCV) and the PyCuda method.

For a 3096x2080 pixel colour picture (the resolution of my camera) using 8 filters (maximum load):
Numpy + opencv 4.1.0 : 7.4 seconds
PyCuda : 1.04 seconds

For a monochrome picture:
Numpy + opencv 4.1.0 : 2.44 seconds
PyCuda : 0.47 seconds

For a 1544x1040 pixel picture (BIN 2 camera) using 4 filters (typical load):
Numpy + opencv 4.1.0 : 1.2 seconds
PyCuda : 0.21 seconds

For a monochrome picture:
Numpy + opencv 4.1.0 : 0.37 seconds
PyCuda : 0.09 seconds

PyCuda outperforms numpy + OpenCV 4.1.0 (about 5.7x faster).

I did not make a test with OpenCV 3.3, but other tests I made before with OpenCV 4.1.0 (compiled from the OpenCV sources on the Jetson Nano) make me think that PyCuda is 10 to 15 times faster than OpenCV 3.3.

Now, I need to work on more complex filters like NLM denoise. I saw the CUDA example but, to be honest, it's quite hard to understand. I think I will need help from Nvidia to write such a filter with PyCuda.

When all my filters are OK, I will put them in my sky survey software to run tests with my camera and the Jetson Nano. Still a lot of work to do.

For newbies like me, I will put my filter routines (PyCuda) on GitHub in the coming weeks.

Alain

Hello,

I have managed to put my PyCuda filter routines into my software SkyPi. Therefore, SkyPi is gone and now I have SkyNano.

All the filters work quite well and the real-time processing is much faster than the previous SkyPi version (mainly numpy + OpenCV).

I made a small video of the first indoor test of SkyNano.

During the test, camera gain and exposure time are always the same.

When the filters are OFF, we see almost nothing.

When the filters are ON, we see quite well.

My camera is a ZWO ASI178MC with a Sony IMX178 sensor. The sensor is very sensitive, but we can see that real-time processing can bring some real enhancement of the sensor signal.

During the test, I watched the Maxwell GPU activity:

  • when filters are ON, the GPU activity is high
  • when filters are OFF, the GPU activity is low

The link to see the video :

Now, as I have already said, I must find a way to write a fast denoising filter. Quite a difficult task for me.

I also have to find the best way to control the Jetson Nano from a laptop through a broadband over power line link (with VNC software) in order to test the Jetson Nano + camera + lens with the night sky.

Alain

Hello,

things are progressing with the Jetson Nano and PyCuda.

I worked on denoise routines to replace the OpenCV fastNlMeansDenoising routine. The Nvidia CUDA example is a very good starting point for this.

My main program uses Python and OpenCV, that is to say the images I get from my camera are 3D arrays for colour images (RGB).

The CUDA denoise examples use Tex2D, so I had to change the KNN and NLM routines to be able to use my 3D arrays.

At first, my routines did not work, until I realized Tex2D values are between 0 and 1 (my 3D array holds values from 0 to 255).
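
In practice, that just means normalizing the arrays before the kernel and rescaling afterwards, something like this (illustration):

import cv2
import numpy as np

img = cv2.imread("test.jpg", cv2.IMREAD_COLOR)       # uint8 image, values 0 to 255
b, g, r = cv2.split(img)
r_norm = r.astype(np.float32) / 255.0                 # 0 to 1, like the Tex2D values
g_norm = g.astype(np.float32) / 255.0
b_norm = b.astype(np.float32) / 255.0

# ... run the PyCuda denoise kernel on the normalized arrays ...

# back to 8 bits for display / saving
r_out = np.clip(r_norm * 255.0, 0.0, 255.0).astype(np.uint8)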

Now, the KNN denoise filter and the fast NLM denoise filter using PyCuda work very well.

I made some speed comparisons between OpenCV and PyCuda for colour images using a Jetson Nano:

1544x1040 pixel image:

  • OpenCV FastNLMDenoise : 0.92 seconds
  • fast NLM PyCuda : 0.17 seconds
  • KNN PyCuda : 0.068 seconds

3096x2080 pixel image:

  • OpenCV FastNLMDenoise : 2.6 seconds
  • fast NLM PyCuda : 0.67 seconds
  • KNN PyCuda : 0.25 seconds

That’s a really good improvement.

I will put my PyCuda routines on GitHub for those who might be interested in them.

Now, all my processing routines are made with PyCuda and my acquisition software (SkyNano) is ready for the first "real conditions" tests.

Alain


Hello,

I have uploaded my denoise filters to GitHub.

There are 4 files :

  • KNN denoise filter for colour image
  • KNN denoise filter for monochrome image
  • Fast NLM denoise filter for colour image
  • Fast NLM denoise filter for monochrome image

You can see those programs here :

https://github.com/AlainPaillou/PyCuda_Denoise_Filters

If you have any suggestions to improve those filters, they are welcome.

Alain

Hello,

as you know, SkyNano is ready for deep sky tests (I hope the sky will be on my side tonight).

I made some changes in SkyNano to be able to process existing videos. Now, I can load a video, test different filter settings and save the result as a new video. I use the CUDA routines for the processing.

I made a test on a Moon video. Here is what my software looks like :

External Media

On the resulting video, I apply processing specific to planetary video and here is the final result for the Moon (Copernicus crater):

I have also made some tests with deep sky videos. Here is my software with a DS video during processing:

External Media

Here is a video of the processing (with filters ON and with filters OFF):

And here are 2 videos to compare the original video (on the left) and the processed video (on the right):

I think the processing is not bad. It can't manage all situations but it brings real improvements to the original videos.

That’s all for now.

Alain

PS: my videos look better than they do on YouTube!

Hello,

first real test of the Jetson Nano last night!

Here is the setup :

External Media

External Media

On the left, you can see my EAA system (Electronically Assisted Astronomy) with the camera in red (ZWO ASI178MC) and the Canon FD 50mm f/1.4 lens. In the box, a Raspberry Pi 3B+ controls the 2 stepper motors, an absolute gyroscope/compass and the manual control paddle. The system's name is Cooper V1.

On the table, you can see an old laptop, which is connected to the mount via a VNC client. On the laptop screen:

  • on the left, the VNC window with the program which controls the mount (the program runs on the Raspberry Pi)
  • on the right, the Stellarium program to search for targets in the sky

Also on the table: the Jetson Nano, linked to the camera over USB3.

This is a test setup. When everything is OK, I will have:

Outdoors: Cooper V1 with the Raspberry Pi and the Jetson Nano inside. The Raspberry Pi will communicate using WiFi and the Jetson Nano will communicate through broadband over power line.

Indoors: a PC with 2 screens:

  • 1 screen for the Raspberry Pi window through a VNC client and the Stellarium program (mount control and sky information)
  • 1 screen for the Jetson Nano windows through a VNC client in order to control the camera, apply real-time filtering and record videos and pictures.

During this first real test, of course, I had a serious bug and I had issues with real-time filtering. Now it's fixed.

In fact, I previously made daylight tests and the exposure times were always short. For real-conditions tests, I had to set longer exposure times and I had synchronization problems between image acquisition and filtering. But now it's OK.

Here is the link to the first video obtained with the Jetson Nano in real conditions:

I managed to track a satellite. It's Cosmos 1812.

Well, that's all for now. Still some work to do, but I am very pleased with the Jetson Nano. Real-time filtering is much faster and this allows me to get smoother videos and to set the filter parameters more easily.

Alain