Electronically Assisted Astronomy with a Jetson Nano

Hello,

I have successfully installed CuPy on the Jetson Xavier NX.

Very easy to install (many thanks to kmaehashi for your help) :

https://docs.cupy.dev/en/latest/install.html#installing-cupy

If you have JetPack 4.X.X :

pip install cupy-cuda102 -f https://pip.cupy.dev/aarch64

If you have JetPack 5.0.2 :

pip install cupy-cuda11x -f https://pip.cupy.dev/aarch64

CuPy is really interesting and I guess it is worth giving it a try.
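As an illustration, CuPy is largely a drop-in replacement for NumPy; a minimal sketch (the NumPy fallback import is just a convenience for machines without CUDA, not part of any JetsonSky code):

```python
# Use the GPU when CuPy is available, fall back to NumPy otherwise;
# the array API is the same in both cases.
try:
    import cupy as xp      # GPU arrays (needs the CUDA install above)
except ImportError:
    import numpy as xp     # CPU fallback, same array API

a = xp.arange(10, dtype=xp.float32)
total = float(a.sum())     # 45.0 with either backend
```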

Alain


Hello Honey-Patouceul,

I tried your code with the Jetson Xavier NX and it works. Just a little problem with colors (red and blue channels are inverted).

Concerning speed, I am not really sure it is faster than with OpenCV. I will check that.

Alain

Hi Alain,
Could you tell whether this issue is seen with the HQ (raw) format, the encoded (H264) format, or both?

I did remove a RGB2BGR conversion that I felt was not needed when using JetsonTreatement, but if the pushed frames are in RGB format rather than BGR, you could try to adjust such as:

gst_pipe = "appsrc ! video/x-raw,format=RGB,width=...
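Another option, if the frames arrive in RGB order while the sink expects BGR, is to swap the channels on the frame itself before pushing it; a minimal NumPy sketch (the `frame` array here is illustrative, not taken from the actual code):

```python
import numpy as np

def swap_rb(frame):
    # Reversing the channel axis turns RGB into BGR (and vice versa),
    # the same result as cv2.cvtColor(frame, cv2.COLOR_RGB2BGR).
    return frame[:, :, ::-1].copy()

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[:, :, 0] = 255            # pure red, RGB channel order
bgr = swap_rb(rgb)            # red now sits in the last (R) slot of BGR
```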

Hello,

The issue occurs with both formats. I will try your proposal.

I made a test with vcpkg.exe on my laptop. Interesting. Unfortunately, it seems there is an issue with CUDA 11.6. I will give it another try later.

I also tried to compile OpenCV with the classical method (Visual Studio) but I got some compile errors. Not a big surprise for me. It is quite funny trying to compile a big library: 3 hours to get errors. When I have time, I will make a clean reinstall on my laptop. I guess something is not really clean on it.

Alain

Hello everyone,

A quick message to wish you a merry Christmas !

Many thanks to the people who participate in this topic.

Special thanks to Dusty_nv !

Alain

Have a good Christmas time too and take care of yourself!


Happy new year !

Alain

Hello,

This year, I will get back to Moon and planetary imaging. My Celestron C9.25 has been sleeping for too long.

I will keep working on the colours of the Moon and I also plan to test my adaptive absorber noise removal filter against atmospheric turbulence.

As Moon and planetary imaging need very high frame rates, I will have to find a way to get a more powerful laptop with as many CUDA cores as possible. Jensen, if you read me, I would like to talk with you.

Alain

I have tried out ChatGPT, which gave me these alternatives to the code line for video writing. Maybe it is worth a look:

Here are a few alternative ways to write the line of code you provided:

  1. Using the opencv-python-headless library:

video = cv2.VideoWriter(os.path.join(video_path,nom_video), cv2.CAP_FFMPEG, 25, (width, height), isColor = True)

  2. Using the moviepy library:

from moviepy.video.io.VideoFileClip import VideoFileClip
video = VideoFileClip(os.path.join(video_path,nom_video)).fps(25).resize((width,height)).write_videofile(nom_video)

  3. Using the pyav library:

import av
video = av.open(os.path.join(video_path,nom_video), 'w')
stream = video.add_stream('vp9', 25)
stream.width = width
stream.height = height

Please keep in mind that, depending on the library and the specific use case, you might have to adjust the parameters and methods used in these alternatives.

One last try:

Here are a few alternative ways to write the line of code you provided using OpenCV:

  1. Using the cv2.CAP_FFMPEG instead of fourcc:

video = cv2.VideoWriter(os.path.join(video_path,nom_video), cv2.CAP_FFMPEG, 25, (width, height), isColor = True)

  2. Using the cv2.CAP_GSTREAMER instead of fourcc:

video = cv2.VideoWriter(os.path.join(video_path,nom_video), cv2.CAP_GSTREAMER, 25, (width, height), isColor = True)

  3. Using the cv2.CAP_AVFOUNDATION instead of fourcc on macOS:

video = cv2.VideoWriter(os.path.join(video_path,nom_video), cv2.CAP_AVFOUNDATION, 25, (width, height), isColor = True)

  4. Using the cv2.CAP_DSHOW instead of fourcc on Windows:

video = cv2.VideoWriter(os.path.join(video_path,nom_video), cv2.CAP_DSHOW, 25, (width, height), isColor = True)

  5. Using the cv2.CAP_MSMF instead of fourcc on Windows:

video = cv2.VideoWriter(os.path.join(video_path,nom_video), cv2.CAP_MSMF, 25, (width, height), isColor = True)

It’s important to note that, depending on the

I don’t know if that will help, and yes, I know it is not as easy as asking this chat bot and then all problems are gone. It was only a try to find some (new) ideas.

I doubt that chatGPT has ever tried what it proposes ;-)

The generic call of the OpenCV VideoWriter would be:

cv2.VideoWriter(desc_str, VideoBackend, 4CC, float(framerate), (int(w),int(h)), bool(isColor))

where VideoBackend is the backend API, such as cv2.CAP_FFMPEG, cv2.CAP_GSTREAMER…

If the backend is FFMPEG, then the container type would be guessed from the file extension of the filepath given as desc_str, and the encoding would be according to the 4CC, such as:

# This would encode with X264 codec and mux into avi container
cv2.VideoWriter('Test.avi', cv2.CAP_FFMPEG, cv2.VideoWriter_fourcc(*'X264'), float(25), ...
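For what it’s worth, the 4CC is nothing more than the four characters packed into a 32-bit integer, least significant byte first; a pure-Python equivalent of cv2.VideoWriter_fourcc would be:

```python
def fourcc(c1, c2, c3, c4):
    # Pack four ASCII characters into one little-endian 32-bit code,
    # as cv2.VideoWriter_fourcc does.
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

code = fourcc(*'X264')        # 0x34363258, the 'X264' codec tag
```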

If the backend is GSTREAMER, the desc_str is the pipeline from appsrc to the wanted sink, and the 4CC is 0 (RAW):

# This would encode into H264 with HW NVENC on Jetson and mux into an AVI container:
cv2.VideoWriter('appsrc ! queue ! video/x-raw,format=BGR ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! video/x-raw(memory:NVMM),format=NV12 ! nvv4l2h264enc ! h264parse ! avimux ! filesink location=Test.avi', cv2.CAP_GSTREAMER, 0, float(25), ...
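To make such a long pipeline string less error-prone, it can be assembled by a small helper; a sketch (the element names assume a Jetson with the nvv4l2h264enc hardware encoder available, and the function name is mine):

```python
def nvenc_avi_pipeline(filepath):
    # Build the appsrc-to-filesink pipeline string for cv2.VideoWriter
    # with the GSTREAMER backend (4CC = 0, RAW input frames in BGR).
    return (
        "appsrc ! queue ! video/x-raw,format=BGR ! videoconvert "
        "! video/x-raw,format=BGRx ! nvvidconv "
        "! video/x-raw(memory:NVMM),format=NV12 "
        "! nvv4l2h264enc ! h264parse ! avimux "
        f"! filesink location={filepath}"
    )

# writer = cv2.VideoWriter(nvenc_avi_pipeline('Test.avi'),
#                          cv2.CAP_GSTREAMER, 0, float(25),
#                          (int(w), int(h)), True)
```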

Hope this will help chatGPT or else for next versions ;-)

LOL, ok it was a try ;-)))))))))))))))))

Hi guys,

ChatGPT will replace us all soon ! Or not.

Alain

ChatGPT still has a lot of training left. See:
https://www.tomshardware.com/news/chatgpt-told-me-break-my-cpu

Skynet is a bit young but he will be back stronger and smarter!

Yesterday, a quick deep sky survey, using AGX Orin and JetsonSky :

Many satellites.

I added HDR capture in a beta version of JetsonSky. I need to test it on easy targets like M42 (Orion Nebula, very hard to get without HDR because of the brightness of the nebula). Need some time to make such tests.

Alain

Hello,

Last night's test with JetsonSky and the AGX Orin. The main target was the Orion Nebula (M42).

I used 50mm F1.4 and 135mm F2.5 lenses.

The result is not so bad but not really good. The 135mm shows more, but F2.5 is not fast enough. My kingdom for a Canon 85mm F1.2 !

HDR did not give the expected results. But as the mount was not calibrated, there was no Earth-rotation compensation, and as the exposure time was a bit long (from 500ms to 1s), I guess the HDR routine could not work properly. Sometimes it worked and the result was interesting with M42, but most of the time it was an epic fail.

Still some work to do.

Alain

Hello,

I was thinking about real further improvements for my project and I need to consider getting better equipment (lenses, cameras, a laptop with many CUDA cores) to get significantly better results.

I am not sure this is the right place to ask for help; if it is not, I apologize and will delete this post.

Well, I do need better equipment, and as my current equipment is not that bad, that means expensive equipment. For 8 years I have bought my equipment myself, but this time the level is a bit high.

So, a friend of mine told me to set up a funding pot. So I did.

I don’t know if you can participate from a country other than France.

So, if you find my work interesting and want to help me get better equipment, you can go there :

It is all in French and I am very sorry for that.

The funding target is very high, so don’t be surprised. I am trying to get very good equipment, and very good equipment is always very expensive.

You can also share the funding pot if you want. This will allow me to reach more people.

Many thanks in advance.

Alain

It seems I had a bad idea. Sometimes we choose the wrong solution to solve a problem.

Anyway, I will continue my work but reduce support, focusing just on my own needs.

So, forget previous post. You can post here freely without fear, I won’t ask you for money, I swear 😜

Clear sky.

Alain

To not forget :

pip install pycuda --user
sudo ln -s /usr/include/locale.h /usr/include/xlocale.h