Electronically Assisted Astronomy with a Jetson Nano

Hello linuxdev,

For now, with JetsonSky as it is, the “smearing” or “ghosting” comes from the noise removal functions. The adaptive absorber noise removal filter and the 3-frame noise removal filter work really well with static videos, but if stars move, I get a ghosting effect. Exposure time is not responsible for the smearing.

If I could use AI to manage noise, that would be great because I would no longer get this ghosting effect. The problem is: “I don’t understand a single word about AI”. Sure, I can run a tutorial, but that does not match my needs.

One day, if the light comes on in my mind, maybe I will be able to try using AI to clean my RAW videos, but for now my brain is not AI compatible!

Alain

😀 I understand about not understanding!

The topic is still interesting. Consider that if you were to preprogram a small movement on the camera mount, then with a long enough exposure, “real” objects would produce that ghosting/smearing, and any pattern that remains stationary afterwards is noise from the sensor. Think of it as a poor man’s version of covering the lens with a cap and taking a dark exposure for a couple of minutes to see what the sensor produces. The difference is that this takes about half a second to a second of movement, and occurs at every movement, whereas the dark frame has to run at the right temperature for a significant time before starting (but is likely more accurate). Pixels which do not change at a given CCD cell are suspect: if movement in one direction changes a pixel’s intensity, that change is due to actual light; pixels which do not follow their neighbors’ values during a movement must be an artifact of the sensor itself.

It is just “food for thought”. There are likely ways to detect “unchanged” versus “changed” based on known camera movement, but it is more a curiosity.
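The idea could be sketched like this: compare two frames taken before and after a known small mount movement; pixels belonging to real objects change value, while pixels that stay identical are suspected sensor defects. A minimal sketch with NumPy (the tolerance value and the toy frames are assumptions, not part of JetsonSky):

```python
import numpy as np

def suspect_sensor_pixels(frame_a, frame_b, tol=2):
    """Return a boolean mask of pixels whose intensity barely changed
    between two frames taken before/after a known camera movement.
    Real objects move with the sky, so their pixels change; pixels that
    stay identical are suspected hot/stuck sensor pixels."""
    a = frame_a.astype(np.int16)
    b = frame_b.astype(np.int16)
    return np.abs(a - b) <= tol

# Toy example: a hot pixel stays at 200 while a "star" moves one pixel.
before = np.zeros((5, 5), dtype=np.uint8)
after = np.zeros((5, 5), dtype=np.uint8)
before[2, 2] = 200; after[2, 2] = 200      # hot pixel: unchanged
before[1, 1] = 180; after[1, 2] = 180      # star: moved by one pixel

mask = suspect_sensor_pixels(before, after)
print(mask[2, 2])  # True  -> flagged as suspect
print(mask[1, 1])  # False -> changed, so it was real light
```

In practice several movements would be needed, since dark background pixels are also “unchanged”; only cells flagged across every movement would be treated as sensor defects.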

Oh yes, I understand.

This could avoid classical sensor calibration. The AADF filter does something a bit like this, but not exactly what you describe.

I will think about that!

Alain

Hello,

I decided to try object detection using AI. After reading things on the internet, I decided to use YOLOv8 with PyTorch.
My first exercise is automatic detection of Moon craters.
First, I had to find some good Moon photos. Luckily, I have some.
Then, I had to build a dataset of annotated photos in YOLOv8 format. After some web research, I used the Roboflow online tool to do this.
I also had to write some small Python programs to train the custom model and test it with images and videos.
I also had to experiment with the dataset and the training parameters to get something usable.
Finally, I had to put all of this into JetsonSky.

Here is my very first try:

For sure, it is not perfect, but it works (I mean the main idea does).

Now I need to get more photos into my dataset, and maybe I will need to create crater categories (small, large, complex) in order to get better detection.

Anyway, I am quite happy with the result. It only took me a few hours to understand the general idea of object detection and get my first results.

I will test this with the Jetson AGX Orin ASAP.

Alain

Hello,

I tried to improve my crater dataset to get better results.

I still use the YOLOv8l pre-trained model as the starting point for my crater model.

Training a model needs a big GPU. With my laptop’s RTX 4080, it takes 2 hours to train the model (only 300 images, batch size = 12 because of limited GPU RAM, epochs = 150).

Now I understand why NVIDIA created H100 and A100 systems with huge amounts of memory.

Here is a more complete video about my new model:

Considering I trained the model with only 300 images, the results are not bad.

To get something more serious, I would need 10 times more images, but then training the model would take quite a while.

I will test this new version of JetsonSky (with the crater model) as soon as I have solved a problem with keyboard management under Linux (the library needs root, which is quite silly).

JetsonSky now has a little AI!

Alain

Things are going well with crater model prediction.

I have tested object tracking with YOLOv8 using the persist option, and it works great, even if the computation time is longer.

I will try to make a model for satellite detection (and shooting stars). I hope it will work.

Alain

Hello,

I made a YOLOv8 model to detect satellites, planes (not very well), and shooting stars.

I tested it (prediction and tracking) with JetsonSky, and it works not so badly.

It also works (crater and satellite detection and tracking) on the AGX Orin; I just tested the models there this afternoon.

I am very pleased with that. To celebrate, I will work on an up-to-date version of JetsonSky for the Jetson Orin, and I will provide it to anyone who wants it for personal use (no commercial use). People who want JetsonSky will have to ask me for it.

But I need some time to get a clean, up-to-date JetsonSky for the Orin.

Alain

Hi,

Here is a video about satellite detection.

I show the RAW video, the preprocessing, and two detection methods:

  • my old OpenCV simple blob detection method
  • the YOLOv8n-based model to detect satellites

The result is interesting.

Alain

Hi,

I need help.

Some functions in JetsonSky need keyboard input (keys pressed to trigger actions).

On Windows, I use the keyboard Python library.

Under Linux, the keyboard library needs root privileges.

If I launch JetsonSky with this command: ‘sudo python JetsonSky.py’, I have:

  • no problem with the keyboard
  • problems with other libraries: Python cannot find them

When I installed keyboard, I used ‘sudo pip install keyboard’. I did not use ‘sudo pip install …’ for the other libraries.

The keyboard library is starting to annoy me.

Does anyone know another way to easily get a kind of keypressed function under Linux with Python (the function must not wait for a key to be pressed, and must return the key value)?

Alain