The engineer told me the PREEMPT_RT patch has to be activated manually, as step 2 of "Building the Jetson Linux Kernel" shows, similar to how jetson_clocks has to be enabled manually. Is that right?
I found a GitHub guide about installing PREEMPT_RT on Jetson boards.
But that guide was only tested on the Xavier developer kit and the Jetson Nano, and it is based on Ubuntu 20.04.6 LTS, while my Jetson Linux version is 36.3, whose official Ubuntu base for L4T 36.3 is 22.04.
So I'm not sure whether it will work or whether it will break the system.
I want to know which method I should use to activate PREEMPT_RT.
Should I just follow what the GitHub guide shows?
Or should I follow the official kernel customization docs? If the kernel customization docs are the right path, how many steps do I need to complete: only up to step 2 of "Building the Jetson Linux Kernel", the whole "Building the Jetson Linux Kernel" section, or more?
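Whichever method I end up using, I assume I can verify after a reboot that the real-time kernel is actually running with checks like the ones below (assuming the RT build exposes /sys/kernel/realtime and exports its config via IKCONFIG):

    # The kernel version string should contain "PREEMPT_RT" on an RT kernel
    uname -v

    # Many PREEMPT_RT builds expose this node; it reads 1 when the RT kernel is active
    cat /sys/kernel/realtime

    # If the kernel config is exported via IKCONFIG, the PREEMPT settings can be inspected too
    zcat /proc/config.gz | grep PREEMPT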
If I want to get the maximum performance out of the official Orin developer kit, is updating to the real-time kernel enough, or do I still need to go through the whole "Building the Jetson Linux Kernel" procedure?
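For the clock settings themselves, my understanding is that the usual commands on the devkit are nvpmodel and jetson_clocks, roughly as sketched below (treating mode 0 = MAXN as an assumption for the AGX Orin devkit; I would check nvpmodel -q first):

    # Query the current power mode
    sudo nvpmodel -q

    # Switch to the maximum-performance power mode (mode number may differ per module)
    sudo nvpmodel -m 0

    # Lock CPU/GPU/EMC clocks to the maximum allowed by the current power mode
    sudo jetson_clocks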
We may have a requirement for multiple processes, so I need to apply the PREEMPT_RT patch, right?
Or could you tell me: if I apply the PREEMPT_RT patch, can a Jetson Linux 36.3 system still run inference normally? Will the PREEMPT_RT patch affect LLM/VLM model inference speed and results?
Hi,
Please set up both environments and give it a try. Ideally the throughput should be similar on the two setups, but it is case by case. I would suggest setting up the real environments and doing a comparison.
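One concrete way to do that comparison, as a rough sketch: run a scheduling-latency benchmark such as cyclictest (from the rt-tests package) on both the stock and the RT kernel, and run your own inference benchmark next to it. The parameters below are only illustrative, and the benchmark script name is a placeholder:

    # Install the standard real-time test suite
    sudo apt-get install rt-tests

    # Measure scheduling latency for 60 s with 4 high-priority threads
    sudo cyclictest --mlockall --priority=80 --threads=4 --interval=1000 --duration=60

    # Then run the same LLM/VLM inference benchmark on both kernels and compare
    # python3 benchmark_inference.py --model <your_model>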
I heard that on an RTX 4090 GPU, after the PREEMPT_RT patch was applied, the system could not find the GPU board. So I am afraid that the PREEMPT_RT patch will affect the Orin developer kit's AI performance, or even break the whole AI inference system.
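After booting the RT kernel I would probably run a quick sanity check like the one below to see whether the integrated GPU stack still comes up (I am assuming nvgpu is the module name for the integrated GPU here):

    # The integrated GPU driver module should be loaded
    lsmod | grep nvgpu

    # tegrastats should report GPU activity (GR3D_FREQ) while a workload is running
    sudo tegrastats --interval 1000

    # Check the kernel log for GPU-related errors
    dmesg | grep -i nvgpu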