Jetson Telescope Mount Controller with Video Capturing and Processing

Hello,

We are designing an astronomy telescope mount controller combined with a video processing platform. Besides mount control, we want to be able to capture video and images and possibly process them.

The final list of features should be:

  • EQ / AZ mount control
  • Stepper motor Mount Control (RA/HA or ALT/AZ directions)
  • Stepper motor Focuser Control (manual/automatic camera focusing)
  • Joystick controlled movement
  • GPS, Temperature / Humidity / Pressure sensors
  • Assisted mount “north orientation” placement using Magnetometer
  • Assisted mount “water leveling” with Gyroscope/Accelerometer
  • Camera assisted polar alignment
  • Finder camera image capture / processing (for manual sky object navigation)
  • Main camera image capture / processing / storage (planetary and DSO objects)
    • Deep space object imaging
      • high resolution long exposure image capturing / storage
      • dark/white frame capture support
      • possibly realtime image processing and stacking
        • dark frame subtraction, bad pixel elimination, noise reduction, contrast and brightness adjustment, color adjustment and enhancement, gradient removal, stacking
    • Planetary imaging
      • high FPS video capture / storage
      • possibly realtime image processing and stacking
        • dark frame subtraction, bad pixel elimination, noise reduction, contrast and brightness adjustment, color adjustment and enhancement, gradient removal, adaptive histogram equalization, stacking
    • Satellite tracking and imaging
      • External tracking (INDI)
      • Internal tracking (object detection, trajectory calculation)
  • Guiding camera image capture / processing (mount and control error correction)
  • Automatic object navigation and tracking (Goto)
    • DSO, planets, moon, sun, asteroids, satellites
  • Autoguiding / tracking with necessary error corrections
    • Externally (INDI) controlled tracking / guiding
    • Internally controlled guiding (object detection and error correction)
      • object detection, cyclic error calculation and predictive correction
  • USB (ZWO) filter wheel control
  • WiFi connectivity
  • USB or NVME storage
  • Web application with video live view
    • remote camera and mount setup
    • h264/h265 camera live video stream (pointer scope, main camera)
      • raw image debayer, color space conversion, h264/h265 compression
  • INDI Library integration

The final hardware will be built on:

  • Jetson Nano or possibly Xavier NX
  • TMC5160 motion controllers in motion controller mode
  • BME280 temperature / pressure / humidity sensor
  • MPU9255 magnetometer/accelerometer/gyroscope
  • Microstack GPS module
  • Status buzzer
  • leveling indicators (LEDs)

Cameras to be used:

  • 2x LOGITECH C270 USB camera (polar alignment, pointer scope)
  • 1x ZWO ASI290MC (COLOR) (planetary imaging, guiding)
  • 1x ZWO ASI1600MM PRO (MONO) (deep space imaging)

Most of the described functionality (except video capture and processing) has already been tested on the SBC, at least on the bench. Most of the C++ software is in a pre-alpha state and ready to be integrated. The Goto/tracking parts have not been touched yet. The software for video capture is currently in progress - just capturing, data storage and live view (the filters mentioned above have not even been tested yet).

The story behind…

A friend of mine got a SkyWatcher 8" and an unmotorized EQ5 mount as a present almost 3 years ago. The idea was that he would motorize the mount on his own. We thought building a goto mount would be a much easier task than it turned out to be. We didn't have any experience with astronomy, mounts, telescopes, optics - nothing. We didn't take inspiration from existing goto mounts or open-source projects available on the internet; we found them much later, when we were looking into why something worked differently than we expected or why something did not work at all. During the last almost 3 years we found that building such a gadget is a really tough task - not just from the electrical and software perspective, but also from the mechanical one. We learned a lot. We found that if the mount is not accurate and smooth enough, with small backlash, nothing can fix it. We found that if the motor drivers are a piece of crap, no accurate navigation and tracking can be achieved. We also found that wrongly soldered wires or shorts cost a lot of money, especially when an SBC gets fried (1 Pi, 1 Panda, many Arduinos, many motor drivers). In the end we found that it would have been much cheaper and easier to buy a commercially available goto mount :-) But... it would not have been as much fun (believe us, there were also a lot of tears, especially when expensive hardware burned out :)). Once this project is completely done, everything will be released as open source, so everybody will be able to build their own gadget for astrophotography or use the software as an example for another project.

Mechanical part

In order to motorize the mount we had to find a way to fit motors on it and fix them in place. We used some universal iron NEMA17 holders which, surprisingly, were easy to fix to the mount with a few screws. We used GT2 pulleys and belts to connect the motors to both EQ5 mount axis shafts. The GT2 pulley ratio is 1:1 - there is no space on the mount for a bigger ratio, although we expected a bigger ratio would be needed for more torque and step smoothness. It was a relatively easy task, despite the fact that the motor and mount axes have different diameters and that it was hard to find GT2 pulleys in Europe. Later, with some experience, it was confirmed that a 1:1 ratio is not enough to achieve the needed tracking accuracy (even with 256 microsteps). We found that stepper motors and drivers are not accurate enough, especially in microstepping mode and under high load. We found we need at least 0.5 arcsec (micro)step resolution to mask driver and stepper inaccuracies. We ended up with a 1:5 planetary gearbox, which together with the internal 1:144 mount transmission ratio gives us 36,864,000 (micro)steps per revolution. That is about 0.035 arcsec per step at 256 microsteps, which is good enough. We also get 5 times more holding torque, which is definitely a good thing.
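For reference, the step-resolution arithmetic above works out as in the following sketch (assuming a standard 200 steps/rev NEMA17 motor; the other ratios are the ones listed above):

#include <cstdio>

int main() {
    const long long fullStepsPerRev = 200;   // NEMA17 motor
    const long long microsteps      = 256;   // driver microstepping
    const long long planetaryRatio  = 5;     // 1:5 planetary gearbox
    const long long wormRatio       = 144;   // EQ5 internal transmission

    const long long stepsPerMountRev =
        fullStepsPerRev * microsteps * planetaryRatio * wormRatio;   // 36,864,000

    const double arcsecPerStep = 360.0 * 3600.0 / stepsPerMountRev;  // ~0.035"

    std::printf("%lld microsteps per mount revolution, %.4f arcsec each\n",
                stepsPerMountRev, arcsecPerStep);
    return 0;
}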

The current version consists of two 3D printed motor holders which exactly fit the mount positions and allow better control of belt tension. The rest is still the same - a GT2 belt on GT2 pulleys is still in use. An additional motor on a 3D printed holder is now mounted on the focuser. This was also funny: a friend of mine bought a nice small stepper motor for which the manufacturer claimed 160 Ncm holding torque. We measured it. It had 20, so it was useless. Another lesson we learned: don't trust no-name manufacturers.

Microstepping, a.k.a. stepper motor control

The first version we built was based on an RPi1 and DRV8825 stepper motor drivers. The whole thing was gamepad controlled. A single tracking speed in one axis was also possible. When we first navigated to Saturn and it held almost still in the view, we were so happy. Later we found that the planet was moving forwards and backwards in the eyepiece. For a long time we could not find where the problem was. First we thought it was a mechanical problem. Later we thought it was a problem with the Linux timer on the RPi, or that the timer was not accurate enough because Linux is not an RTOS and time slices are not predictable. Two years later, when we swapped the stepper drivers for TMC ones, we realized that the DRV8825 is not a driver for applications where high accuracy is required. Both of them were losing microsteps from time to time.

So we found that accurate stepper motor control requires good drivers and an accurate pulse source on the CPU side. Once we realized the pulses were not accurate enough, we tried to switch to an Arduino as the pulse source. But it is not powerful enough to drive two stepper motors accurately all the time, especially at higher speeds - the CPU frequency, combined with relatively expensive interrupt handling, is simply not enough to drive 2 motors plus possibly a 3rd motor for the focuser. Still, the current version uses an Arduino as the step/dir generator and it works relatively well. In order to get the most accurate and smooth movement we would need a more powerful CPU with an RTOS running on it, an ASIC, or a smart stepper driver.

We decided on the last option, and the final version will be built on the TMC5160 motion controller, which allows us to simply send a target position and/or speed to the driver. It also lets us set acceleration and deceleration ramps. It is accurate, silent - simply a great driver.
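To illustrate what "motion controller mode" means in practice, here is a minimal sketch of a goto move on the TMC5160 (register addresses are taken from the TMC5160 datasheet; the spiWrite() helper and the ramp values are placeholders for illustration, not our actual configuration):

#include <cstdint>

// Placeholder for whatever SPI transport is used; writes one 32-bit value
// to a TMC5160 register (write access sets bit 7 of the address byte).
void spiWrite(uint8_t reg, uint32_t value);

constexpr uint8_t WRITE_FLAG = 0x80;
constexpr uint8_t RAMPMODE = 0x20;  // 0 = positioning mode
constexpr uint8_t A1 = 0x24, V1 = 0x25, AMAX = 0x26, VMAX = 0x27;
constexpr uint8_t DMAX = 0x28, D1 = 0x2A, VSTOP = 0x2B, XTARGET = 0x2D;

void gotoPosition(int32_t targetMicrosteps)
{
    spiWrite(WRITE_FLAG | RAMPMODE, 0);        // positioning mode
    spiWrite(WRITE_FLAG | A1,       1000);     // first acceleration phase
    spiWrite(WRITE_FLAG | V1,       50000);    // threshold velocity
    spiWrite(WRITE_FLAG | AMAX,     500);      // acceleration above V1
    spiWrite(WRITE_FLAG | VMAX,     200000);   // cruise velocity
    spiWrite(WRITE_FLAG | DMAX,     700);      // deceleration down to V1
    spiWrite(WRITE_FLAG | D1,       1400);     // final deceleration phase
    spiWrite(WRITE_FLAG | VSTOP,    10);       // must be > 0
    spiWrite(WRITE_FLAG | XTARGET,
             static_cast<uint32_t>(targetMicrosteps));
    // The TMC5160 now generates the whole trapezoidal ramp on its own;
    // no step/dir pulses from the host CPU are needed.
}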

SBC

At first we thought an RPi would be enough for everything. Later, after a few nights of astronomy sessions, we agreed that, since it is necessary to take a lot of stuff to the observing site, it would be great to decrease the number of items we have to carry. Since then the number has only been increasing, as we keep buying more optical components and accessories to take with us, but that is a different story. We mainly wanted to have most of the equipment in one compact unit and to plug in as few cables as possible. Ideally we wanted to at least leave the notebooks we use for image/video capture at home.

We wanted a single small computer allowing us to control the mount but also to capture photos and video and store the data. We tested whether the RPi4 is capable of high-FPS video capture, but our performance tests showed that the RPi4 USB (or its internal buses) is not fast enough to transfer FullHD at a high frame rate, let alone together with another video stream for error correction. Later we decided to try a different platform - the LattePanda. It has an Arduino built in and promised fast USB 3.0. It worked well, but we were not able to test whether it could also store the data on another USB drive before... it burned.

In the end, as we also wanted remote control and suspected the LattePanda would not be enough, we decided to go for the Jetson Nano. We also wanted to try some real-time image processing.

So I ordered a devkit, and when it arrived I was really disappointed. The performance of the ZWO camera was really bad over USB 3.0. It could not handle the video capture properly: the stream was lost from time to time, the framerate was unstable, the CPU usage was really high - terrible. I suspected there was some bottleneck on the internal PCIe bus.

So I ordered a Xavier NX devkit, since it has USB 3.1, and I was hoping it would be better. But the problem was the same: very high CPU usage and problems with the video stream.

I looked over the NVIDIA forums to see whether this could be a hardware problem, and I found that the Nano really is capable of 5 Gbit USB 3.0 transfers and the Xavier of 10 Gbit USB 3.1 transfers. Fortunately, after another 2 days of reverse engineering, debugging, testing and learning USB bus details and libusb, I discovered that the ZWO API probably uses libusb in a wrong and inefficient way.

After another 2 days of writing a libusb wrapper and avoiding calls to the stream handling functions of the ZWO library, it was possible to achieve the expected frame rates with 0.7% CPU usage. Thanks ZWO, you made us buy the Xavier NX. It's a nice piece of hardware - pricey, but very nice and powerful. I wanted to buy it anyway, you just made me buy it now :). But now, as compensation, please make the camera API open source. Otherwise the bugs in the libusb and buffer handling will never get fixed.
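The core of the workaround is essentially textbook asynchronous libusb usage: keep several bulk transfers queued on the camera's bulk-IN endpoint so the bus never stalls, and hand completed buffers to the processing code without extra copies. A minimal sketch (the endpoint address, transfer sizes and the frame handler are illustrative assumptions, not the actual ZWO internals):

#include <libusb-1.0/libusb.h>

constexpr unsigned char EP_IN     = 0x81;       // assumed bulk-IN endpoint
constexpr int           NUM_XFERS = 8;          // transfers kept in flight
constexpr int           XFER_SIZE = 1 << 20;    // 1 MiB per transfer

void onFrameChunk(unsigned char* data, int len); // assumed downstream handler

static void LIBUSB_CALL xferDone(libusb_transfer* t)
{
    if (t->status == LIBUSB_TRANSFER_COMPLETED)
        onFrameChunk(t->buffer, t->actual_length);
    libusb_submit_transfer(t);                   // immediately re-queue
}

void startStreaming(libusb_device_handle* dev)
{
    for (int i = 0; i < NUM_XFERS; ++i) {
        libusb_transfer* t   = libusb_alloc_transfer(0);
        unsigned char*   buf = new unsigned char[XFER_SIZE];
        libusb_fill_bulk_transfer(t, dev, EP_IN, buf, XFER_SIZE,
                                  xferDone, nullptr, 1000 /* ms timeout */);
        libusb_submit_transfer(t);
    }
    // The event loop drives the completion callbacks; in a real application
    // this runs in its own thread and has a termination condition.
    while (true)
        libusb_handle_events(nullptr);           // nullptr = default context
}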

Video and image capturing / processing

Currently we are working on video capture. The idea is to capture FullHD@170 FPS / 8-bit or FullHD@85 FPS / 12-bit video, or long-exposure images. The captured data will be stored on disk. The captured data will also be processed by CUDA - debayer, color depth conversion, color format conversion - and either displayed on screen or H264 encoded and streamed over RTSP (or HTTP) to the controlling device (such as a mobile phone).
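To put the required throughput into perspective, a back-of-the-envelope calculation (assuming the ASI290MC's 1936x1096 output and 12-bit samples stored in 16-bit containers):

#include <cstdio>

int main() {
    const double pixels = 1936.0 * 1096.0;
    const double rate8  = pixels * 1 * 170;   // bytes/s, 8-bit @ 170 FPS
    const double rate12 = pixels * 2 * 85;    // bytes/s, 12-bit @ 85 FPS
    // Both work out to roughly 2.9 Gbit/s of raw Bayer data - far beyond
    // USB 2.0 and a large chunk of the 5 Gbit/s USB 3.0 budget.
    std::printf("170 FPS / 8 bit : %.2f Gbit/s\n", rate8  * 8 / 1e9);
    std::printf(" 85 FPS / 12 bit: %.2f Gbit/s\n", rate12 * 8 / 1e9);
    return 0;
}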

Video processing / image filtering

Later on, hopefully, some real-time filtering will be applied to the video / images being captured and stored.


Problems we are solving:

  • See the eglstream cuda opengl interop topic for details. Currently I am working on the EGL interop. I want to take the libusb-captured data, pass it to CUDA, debayer it and display it on screen (a GL textured quad). As I wanted to minimize memcpys, I was hoping that EGLStream would help me with this task, and later with passing the data to the video encoder - at least that is what I've read in the docs. So I wrote an OpenGL app using an X window to render the content, but I am not able to bind the consumer texture to the EGL stream.
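For context, this is the wiring being attempted - a sketch only, not a verified working solution; the extension entry points have to be loaded via eglGetProcAddress, and all error handling and EGL/GL context setup is omitted:

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <cudaEGL.h>

void setupStream(EGLDisplay dpy)
{
    auto createStream = (PFNEGLCREATESTREAMKHRPROC)
        eglGetProcAddress("eglCreateStreamKHR");
    auto bindConsumer = (PFNEGLSTREAMCONSUMERGLTEXTUREEXTERNALKHRPROC)
        eglGetProcAddress("eglStreamConsumerGLTextureExternalKHR");

    // 1. Create the stream.
    EGLStreamKHR stream = createStream(dpy, nullptr);

    // 2. Bind an external texture as the consumer; a GL context must be
    //    current and the texture bound when this call is made.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    bindConsumer(dpy, stream);

    // 3. Connect CUDA as the producer; debayered frames are then handed
    //    over with cuEGLStreamProducerPresentFrame().
    CUeglStreamConnection conn;
    cuEGLStreamProducerConnect(&conn, stream, 1936, 1096);
}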

Problems solved (mainly Jetson related):
  • ZWO usblib: bad performance
    • the code must be reworked, but it works somehow

Photos

Version 0 - The first one we saw at our friend's place and the reason why this project was born…

Version 1 - Initial one…
thumbnail_IMG_0009

Version 2 - Current (RPI4 + Arduino Nano)
thumbnail_IMG_0008

Version 2 - Motors, 3D printed case for connectors and the box with the mount controller
thumbnail_IMG_0010

Version 3 - NextGen devkit with RPI4 and all required hardware

Hi,
For running deep learning inference, we would suggest using the DeepStream SDK. Here is the announcement of the latest version, 5.0.1:
Announcing DeepStream 5.0.1
The implementation is based on gstreamer.

You may also try jetson_multimedia_api. Frame capture there is done through the v4l2 interface. We have two samples:

/usr/src/jetson_multimedia_api/samples/v4l2cuda
/usr/src/jetson_multimedia_api/samples/12_camera_v4l2_cuda

If your source supports the UVC driver, you may try the samples. We don't have experience with capturing through libusb. Maybe you can check the existing samples and do the integration.
Document of jetson_multimedia_api:
https://docs.nvidia.com/jetson/l4t-multimedia/index.html

Hello, unfortunately the camera does not support UVC. But that is not a problem. I can get the data from it and pass it to CUDA. I can also process it, but I need to check the result somehow - ideally by displaying it on the screen. I am having problems with passing the data from CUDA to OpenGL. I prefer to use a low-level API such as EGLStream to pass the data from CUDA to an OGL texture or to the video encoder. As far as I understand, it should be possible using EGL without any memory copying. I split the project post from the problem posts. Please see the EGL problem post for details.

Hello,

great project. I will follow it if you post some information here.

I did something similar but simpler, I guess. You probably already saw my topic here:

Wish you the best for this beautiful project.

Alain

Hello Alain,

thanks for your interest. I have certainly seen your project and I was really impressed by it, especially by the videos showing the Moon in color :) and the satellite tracking / object detection. I wanted to contact you later, once I start implementing the video / image filtering, to discuss some CUDA and math stuff.

Currently I have just one question. Did you experience any problems with the performance of the ZWO API? I was not able to get a reasonable and stable frame rate at the highest resolution the ASI290MC can give us. I had to “hack” the API and implement the streaming on my own, but currently I can use just one resolution / frame rate. I need to hack it further, or I have to convince ZWO to make the API open source - at least the part that contains no camera-related data and code.

I don’t have much experience with the ZWO API.
I use Python for my software and a Python binding to use the SDK library, so it is far from being optimized for the highest frame rate!
My main purpose was to apply real-time filtering at a quite low frame rate, considering it is hard to get very fast treatments (I mean treatments faster than 10 ms), so frame rate was not the key point for me.
To get decent treatment times, you will need CUDA for sure. OpenCV is interesting but too slow, and OpenCV with CUDA is useless from what I saw.
The Jetson Xavier NX will give you much better results than the Jetson Nano. The Nano is a bit weak for a reasonable frame rate with some real-time filtering. It can do the job, but the Xavier NX will be more suitable.

Considering ZWO (in fact considering Chinese firms), they use you as much as they can, but they will never help you if there is no direct interest in it for them, even if you buy their products. I stopped any relations with Chinese firms.

It's about the same with French firms. Impossible to get any kind of support from French firms.

From what I know, only US firms like NVidia can provide a win-win collaboration for free.

I hope you will be able to get some real support from ZWO, but I doubt it. You just have to pray!

Alain

Hi,
For using the hardware encoder, please refer to 01_video_encode. The buffer has to be an NvBuffer. You need to allocate NvBuffers and copy the video data into them. The following function calls can get a CUDA pointer so that you can do the copy through CUDA:

NvEGLImageFromFd();
cuGraphicsEGLRegisterImage();
cuGraphicsResourceGetMappedEglFrame();
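
A rough sketch of how these three calls chain together, modeled on the jetson_multimedia_api samples (the dmabuf fd, the EGLDisplay and the CUDA context are assumed to be set up by the surrounding code; error handling omitted):

#include <cudaEGL.h>
#include <nvbuf_utils.h>

// Maps the NvBuffer behind a dmabuf fd and returns a device pointer to the
// first plane, so CUDA can copy or write video data into it.
void* mapNvBufferToCuda(EGLDisplay display, int dmabufFd,
                        CUgraphicsResource* outResource)
{
    EGLImageKHR image = NvEGLImageFromFd(display, dmabufFd);

    cuGraphicsEGLRegisterImage(outResource, image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);

    CUeglFrame frame;
    cuGraphicsResourceGetMappedEglFrame(&frame, *outResource, 0, 0);
    cuCtxSynchronize();

    // cuGraphicsUnregisterResource() and NvDestroyEGLImage() undo the
    // mapping once the frame has been filled.
    return frame.frame.pPitch[0];
}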

Phew, I was afraid of such an answer. So I will have to reverse engineer the API library…
Never mind, thanks for the info. I'll keep you updated about the project status.

A stupid comment and generalisation, which of course is not true even on the NVidia side.
You are on a multinational forum, please don't generalise and show a minimum of education.

Hello fpsychosis,

I understand what you mean.

Don't worry about my education, it is not that bad. But French people have the bad habit of saying things without any kind of "filter". Sorry for that.

Have a nice day.

Alain

Hi all,

just a small update. After some research I have to say the original ZWO API library is a piece of crap in terms of USB data transfer handling. I did some tests on my own and I was able to read data at 2.5 Gbit/s with CPU usage between 0.5 and 3%. They probably handle the bulk transfers wrongly and are also probably copying memory too many times, so when I try to capture with the original library I am glad to see 100 FPS at 100% CPU usage…

I wrote to ZWO on their forum and they told me they will fix the library, but also that they will not make it open :) So hopefully they will at least fix it.

Yesterday I looked around to see whether somebody else had been fighting the same problem, and I found a "patch" for the ZWO camera library:

I tried it and I am able to capture video at 170 FPS on the Xavier with CPU usage around 20%. Phew. I had wanted to write it on my own while waiting for ZWO to fix it; I am glad I don't need to :-D

Finally, I tried to use CUDA (the NVIDIA NPP library) to apply the first filter to the video… and I am able to debayer and display the camera data on the Xavier using an OpenGL texture with no problems. The performance is satisfying - 1936x1096@170 FPS with CPU usage around 55% and GPU usage around 60% - and there will be no OGL rendering in the final version anyway. I am also still not sure whether the video filtering will be done at such a high framerate, but I am glad it works now. I will definitely store the original RAW video as an uncompressed AVI file on the NVMe drive for offline processing.
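For the record, the debayer step itself is a single NPP call; a minimal sketch (assuming 8-bit RGGB raw frames already resident in GPU memory - the actual Bayer pattern and all buffer management are glossed over here):

#include <npp.h>

void debayerFrame(const Npp8u* d_raw, int rawPitch,
                  Npp8u* d_rgb, int rgbPitch,
                  int width, int height)
{
    const NppiSize size = { width, height };
    const NppiRect roi  = { 0, 0, width, height };

    // CFA (Bayer) raw data -> packed 8-bit RGB.
    nppiCFAToRGB_8u_C1C3R(d_raw, rawPitch, size, roi,
                          d_rgb, rgbPitch,
                          NPPI_BAYER_RGGB, NPPI_INTER_UNDEFINED);
}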

Another thing I tried to achieve was streaming low-latency video from the Xavier over the network. For this purpose I used the Logitech C270 webcam, which supports V4L, so it was easy to capture / stream the video using FFMPEG in software mode and GStreamer in hardware encoding mode. I lost a few days trying to send the video to an HTML5 Video element using various methods, but I didn't find a way to get the latency under a few seconds (4 or more). There may be a way using WebRTC, which I have not tried yet, because I found a much easier solution.

I am using the following web server / JavaScript player to stream the MPEG4 (H264) video from the Xavier to any device. I tested all the devices I own and all of them worked correctly, with no problems. It works great and I would recommend it to anyone who wants to stream video from any device (cars, drones and so on).

I tried to tune the GStreamer HW encoding for a while, but I didn't find a big difference between the various settings of the omxh264enc encoder. There is another HW encoder, but it can't be used as it is not optimized for low-latency encoding. I didn't do any measurements, but I guess the latency was under 200 ms in total over a gigabit ethernet network. I need to perform further tests over WiFi, but I expect very similar results and I am really satisfied with it. In the end I will stream 3 videos at once (1x 4K@30 max, 2x 1080p@30 max) from 3 cameras - the main one, the guiding one and the pointing one. Additionally there will be a polar finder camera, but it will not be necessary to stream its video simultaneously with the others.

The self-explanatory command I used in the end is:

gst-launch-1.0 -ev v4l2src device=/dev/video0 ! 'video/x-raw, width=640, height=480' ! nvvidconv ! omxh264enc ! 'video/x-h264, stream-format=byte-stream' ! tcpclientsink port=5000

Please note the WS-AVC server was listening on port 5000 for incoming video and re-streaming it over WebSockets to any client connected to it on port 8081.


I am glad I bought the Xavier. It is a really nice piece of hardware. Thanks for it, NVidia ;)

Now I will continue with video encoding (I am trying to pass the data directly from CUDA to the encoder) and streaming. Once I finish that, I will focus on the web server and web code (camera control, storage management, playback of stored files and so on).

Later I plan to publish sources, guides and videos showing how I achieved some of the goals, but currently it is too early - I am still just figuring out how to do it.


Another small update - I switched to WebRTC and it works much better than the software JavaScript decoder. Besides the fact that the browser uses hardware for video decoding, the latency decreased further (though I still haven't measured it). I am using the Janus WebRTC server and its streaming demo.

https://janus.conf.meetecho.com/

Regarding the Janus WebRTC server, I more or less followed this article:

but as we are on Ubuntu, it is better to use the build guide from the Janus server repository on GitHub.

Currently I am not using Nginx to serve the static HTTP content but the nodejs server from ws-avc-player - I was too lazy to install and configure Nginx :), but in the end I will use it as a proxy in front of nodejs and Janus so they are hosted on a single address and port.

The GStreamer pipeline was simplified so that no conversions are done by GStreamer at all. This allows me to stream 1080p@30 with about 30% CPU usage. Please note that most of the CPU usage (about 20%) is consumed by the video capture application. This application captures the video from the ZWO cam at 170 FPS using the improved ZWO API lib, extracts 30 FPS from it, debayers the frames with CUDA, converts RGB to I420 with CUDA and streams the result to GStreamer over TCP. GStreamer then parses the RAW video and encodes it to h264 using omxh264enc at a 10 Mbit bitrate. Finally, it packs the stream into an RTP H264 payload and streams it over UDP to the Janus streaming plugin.

gst-launch-1.0 -ev tcpserversrc port=6000 ! 'video/x-raw, width=1936, height=1096, format=I420, framerate=30/1' ! rawvideoparse use-sink-caps=true ! omxh264enc bitrate=10000000 ! rtph264pay config-interval=1 ! udpsink host=127.0.0.1 port=8004
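For completeness, the CUDA-side RGB-to-I420 step mentioned above can also be a single NPP call; a sketch (plane allocation and the capture loop are assumed, and the color-range subtleties of the RGB-to-YUV conversion are glossed over):

#include <npp.h>

void rgbToI420(const Npp8u* d_rgb, int rgbPitch,
               Npp8u* d_y, Npp8u* d_u, Npp8u* d_v,
               int width, int height)
{
    Npp8u*   dst[3]     = { d_y, d_u, d_v };
    int      dstStep[3] = { width, width / 2, width / 2 };  // tightly packed planes
    NppiSize roi        = { width, height };

    // Packed RGB -> planar YUV 4:2:0 (I420 layout: full-size Y, half-size U and V).
    nppiRGBToYUV420_8u_C3P3R(d_rgb, rgbPitch, dst, dstStep, roi);
}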

I also tried 60 FPS, but GStreamer then consumed 100%+ CPU and the video stuttered from time to time. A further optimization would be to avoid GStreamer completely and encode the video / create the RTP payload directly in the video capture app.

Additionally I tried a bigger bitrate, but the video stutters from time to time on the client even though both devices are connected over LAN. I need to investigate this further as I don't know which component in the chain causes it. I would like to be able to stream 1080p@60@40Mbit on the local LAN, although in the end it will not be needed. The target app will use 15 or 30 FPS - fair enough for the target usage - so a bitrate around 10 Mbit will be sufficient.

The second v4l2-compatible camera is streamed to Janus entirely via GStreamer, with no additional custom software involved. I also switched it to 30 FPS, but the video coming from the camera must then be in image/jpeg format. In the end it will be necessary either to process this video in custom guiding software or to use an INDI driver so that 3rd-party guiding software can be used - as there is a requirement to track satellites precisely, custom guiding software will probably be needed.

The current GStreamer command line for the Logitech C270 v4l2-compatible camera to stream h264 video through Janus WebRTC is:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'image/jpeg, width=1280, height=960, framerate=30/1' ! jpegdec ! omxh264enc ! rtph264pay config-interval=1 ! udpsink host=127.0.0.1 port=8005