Jetson Orin Nano 8GB

I am developing a drone based on a Jetson Orin Nano. The idea is for it to be autonomous, or close to it. To do so, I am using a series of sensors, including an Intel RealSense D455, and running the Isaac ROS Visual SLAM package with ROS 2.

The project is progressing well, but I confess that I am new to the Jetson and ROS world, so there are some questions I would like the community’s help in answering.

The main question I have is about how I should deal with tasks that require real-time execution, such as running the control system, which requires sensor reading, fast processing, and motor actuation. I understand that, theoretically, the Jetson has enough computational power to handle this, but because it runs a non-real-time operating system, I am concerned. I've seen projects where the Jetson delegates this type of task to external hardware (a flight controller), and I believe that would probably be the best solution for my project, since the current version of my drone is already controlled by a controller implemented on an ESP32. However, I confess that I would like to centralize all logic on the Jetson, if possible.

Therefore, I would like to know which of these two approaches the community recommends, and why:

1- Centralize all logic on the Jetson, including the control system.

2- Use the Jetson only for computer vision algorithms (VSLAM, etc.) and delegate the control system to external hardware (a flight controller).

This is only an opinion; there is no hard or exact answer. What you are talking about is hard realtime for control systems, and Jetsons do not have the architecture for hard realtime. They are built around ARM Cortex-A series cores, which are incapable of "true" hard realtime. ARM actually defines a fairly fine grain of different levels of realtime, and it is true that you can improve Linux on a Cortex-A (or even on an x86_64/amd64 PC) via an RT kernel extension such as PREEMPT_RT, but the hardware itself will never be true hard realtime; at best it can be "soft" realtime.
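As a concrete illustration of the "soft realtime at best" point: on Linux you can check whether the running kernel carries the PREEMPT_RT patch and request a real-time scheduling class, but even when both succeed you only get improved (soft) latency, not a hard guarantee. A minimal Python sketch, assuming a Linux host (the version-string check is only a heuristic, and SCHED_FIFO usually requires root or CAP_SYS_NICE):

```python
import os

def kernel_is_preempt_rt() -> bool:
    """Heuristic: PREEMPT_RT kernels usually advertise it in the version string."""
    return "PREEMPT_RT" in os.uname().version

def try_enable_rt(priority: int = 50) -> bool:
    """Try to switch this process to the SCHED_FIFO real-time scheduling class.

    Returns True on success, False if the kernel refuses (typically because
    the process lacks root/CAP_SYS_NICE). Even with SCHED_FIFO on a
    PREEMPT_RT kernel, a Cortex-A gives soft realtime at best.
    """
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except PermissionError:
        return False

print("PREEMPT_RT kernel:", kernel_is_preempt_rt())
print("SCHED_FIFO granted:", try_enable_rt())
```

Even when `try_enable_rt()` succeeds, it only changes how the Linux scheduler prioritizes the process; it does not change the cache and buffering behavior of the hardware discussed below.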

If you don’t have a lot of processing to do, e.g., there is not much in the way of complicated multitasking, then the inexpensive Cortex-M series can do hard realtime (without “functional safety”). As the workload goes up, some of the hardware assists in the Cortex-R series start becoming important. Cortex-R can handle more complicated scheduling than Cortex-M can, plus, if desired, in some cases Cortex-R can be set up with lock-step “shadow” cores which automatically take over if there is a hardware failure (this is what “functional safety” is: both hardware and software work seamlessly to keep functioning without any delay if something fails).

All desktop PCs and all Cortex-A series parts involve lots of caching and buffering in much of what they do. Whenever there is multitasking, and one process has its context saved before another process takes a time slice on a core, there is a chance that the cache will no longer be useful to the new process, and there will be a cache miss (two threads of one process sharing data might avoid this; two unrelated processes usually will not). There will be a short burst of delay while the cache is refilled. Had the other process not run, the cache probably would not have caused such a delay. Even in a single thread, if enough data is needed, there might still be a cache miss; it will just happen less often.
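You can observe this kind of jitter directly: run a fixed-period loop and measure how late each wakeup actually is. On a stock (non-RT) kernel the worst case is typically far above the average, which is exactly the unpredictability that matters for a control loop. A small Python sketch (the 1 ms period is an arbitrary choice for illustration):

```python
import time

def measure_jitter(period_s: float = 0.001, iterations: int = 200):
    """Run a fixed-period loop and record how late each wakeup is.

    Returns (average lateness, worst-case lateness) in seconds. On a
    non-realtime kernel the worst case can be orders of magnitude
    larger than the average.
    """
    lateness = []
    next_deadline = time.monotonic() + period_s
    for _ in range(iterations):
        time.sleep(max(0.0, next_deadline - time.monotonic()))
        lateness.append(time.monotonic() - next_deadline)
        next_deadline += period_s
    return sum(lateness) / len(lateness), max(lateness)

avg, worst = measure_jitter()
print(f"average lateness: {avg * 1e6:.0f} us, worst: {worst * 1e6:.0f} us")
```

The interesting number is not the average but the worst case; a flight controller has to survive the worst wakeup, not the typical one.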

So what would happen if your flight-control logic suddenly stops for a time slice? Your drone probably has something setting engine speed/thrust, and whatever it is doing, e.g., going straight, turning, landing, and so on, will keep going. That’s okay if the burst during which it goes catatonic is 1 ms, but as speed goes up, perhaps for something which is actually supersonic, or for an automobile running 75 m.p.h. down a highway in bumper-to-bumper traffic, this could be fatal.

One way around this, if your sensors and logic are “on average” rather fast, is to buffer more. As long as the buffer is capable of holding, say, all commands for 10 seconds of flight, and on average the sensors and logic keep at least some content in that buffer, you’ll get smooth flight without error. But what happens if something unexpected occurs and that 10-second buffer is invalidated? That’s the problem with buffering: it adds latency, and anything can happen during that latency.

With a slower drone, or with a robotic device driving on the ground at low speed, buffering probably won’t hurt; minor delays between sensing and acting won’t hurt anything. Jetsons are perfectly fine for slow crawling land robots, e.g., they can be used in office mail delivery robots without any issue. With aerial drones it starts to matter: some drones are faster than others, and even a low-end Jetson might work for a slow-moving drone that gradually maps out the inside of a building, while a faster (but otherwise similar) drone might require the fastest of the Jetsons, the Orin, to be okay. The Orin Nano (the topic of this forum) is probably only beaten in performance by the AGX Orin, but for the most part, to get a compact system, the Orin Nano or NX is probably the best choice if you don’t have a separate control and sensing setup.

If you want “completely-and-always-perfect behavior”, then you probably need everything running on a Cortex-R. If safety matters, e.g., it is for avionics in a passenger jet landing in zero visibility, then you must use only the Cortex-R. For Jetsons, depending on what you are doing, it is not a bad idea to have a separate control system. I don’t know whether RISC-V cores can do (A) soft realtime or (B) different degrees of hard realtime. But if you have a CPU core dedicated to nothing but motor control, then even if it is not hard realtime, you’ve probably improved the timing of the control loop. It may well be better to keep that as your controller.
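If you do keep the control loop on the Jetson, one cheap step in the "dedicated core" direction is to pin the control process to a single CPU core, ideally one the scheduler otherwise leaves alone via the `isolcpus=` kernel boot parameter. This is not hard realtime, but it reduces context-switch and cache interference from other processes. A minimal Python sketch using the Linux affinity API (core 0 is just an example; on a real setup you would pin to the isolated core):

```python
import os

def pin_to_core(core: int) -> set:
    """Restrict the current process to a single CPU core.

    Combined with isolcpus=<core> on the kernel command line (so the
    scheduler keeps other tasks off that core), this makes the timing
    of a control loop far more predictable -- though still not hard
    realtime on a Cortex-A.
    """
    os.sched_setaffinity(0, {core})        # no root needed for own process
    return os.sched_getaffinity(0)          # report the resulting mask

print("now restricted to cores:", pin_to_core(0))
```

Unlike the SCHED_FIFO change, affinity can be set without elevated privileges for your own process, so it is an easy first experiment.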

There are also a number of SoCs and SBCs which have a fairly low-power Cortex-A CPU plus several low-power Cortex-M cores. One could assign a different Cortex-M core to each motor, and this would be pretty close to hard realtime (something still has to talk to the Cortex-M cores, and that is the only reason such a setup would not be fully hard realtime).
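Whatever talks to those Cortex-M cores needs a small wire protocol over UART/SPI, and that link is the weak point just mentioned. A purely hypothetical Python sketch of framing one motor command (sync byte, motor id, 16-bit throttle, checksum — the field layout and 0–2047 throttle range are invented for illustration; any real protocol would be your own design or that of your flight stack):

```python
import struct

SYNC = 0xA5  # hypothetical start-of-frame marker

def encode_motor_frame(motor_id: int, throttle: int) -> bytes:
    """Pack one motor command into a 5-byte frame:
    sync (1B), motor id (1B), throttle little-endian (2B), checksum (1B)."""
    if not 0 <= throttle <= 2047:
        raise ValueError("throttle out of range")
    body = struct.pack("<BBH", SYNC, motor_id, throttle)
    checksum = sum(body) & 0xFF  # simple additive checksum
    return body + bytes([checksum])

frame = encode_motor_frame(motor_id=2, throttle=1500)
print(frame.hex())
```

The frame would then be written to the serial device connected to the Cortex-M (e.g., with pyserial); the checksum lets the microcontroller reject frames corrupted on the wire.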

The Orin Nano is probably capable of what you want in a number of circumstances (it’s fast, but its latencies vary), but you have not given details of what your drone is doing. There is a reason why DJI’s “world’s fastest” drones use high-end embedded systems (I’m guessing an Orin of some sort these days, but I have not looked). There are also reasons why you cannot use a Jetson in anything requiring either functional safety or truly high performance: even if it were legal, you couldn’t use a Jetson for aircraft avionics, simply because there would be moments when the avionics freeze before continuing. I’m not the right guy to ask about a specific use, but if you give more details, such as how fast the drone is and what the required sensor frame rate is (without dropping a frame), then others might be able to comment.

