Linux-headers for AGX Orin

Hi,
I want to install the Linux kernel headers for Orin, but I could not find a package in the apt sources, nor a download link on the NVIDIA developer forums.
The system information shows: Linux tegra-ubuntu 5.10.120-rt70-tegra #5 SMP PREEMPT RT Thu Jun 1 15:55:46 CST 2023 aarch64 aarch64 aarch64 GNU/Linux
I would greatly appreciate anyone's help.

Please provide the following info (tick the boxes after creating this topic):
Software Version
[√] DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
[√] Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
[√] DRIVE AGX Orin Developer Kit (not sure of its number)
other

SDK Manager Version
1.9.2.10884
[√] other

Host Machine Version
[√] native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

Please refer to Compiling the Kernel (NVIDIA DRIVE OS Linux) and locate the source files in the specified path on your host system.

After installing DRIVE OS using the SDK Manager, you should be able to find the relevant files under ~/nvidia/nvidia_sdk/DRIVE_OS_6.0.6_SDK_Linux_DRIVE_AGX_ORIN_DEVKITS/DRIVEOS/drive-linux/kernel/source/oss_src.
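As a rough sketch of what "using" that source tree can look like (the defconfig name and the exact layout under oss_src are assumptions here; the Compiling the Kernel page is the authoritative reference), preparing the tree in place so header-dependent tools can build against it would be something like:

```shell
#!/bin/sh
# Sketch: prepare the DRIVE OS kernel source tree in place so that tools
# expecting kernel headers can point at it. The path below is the SDK
# Manager layout from above; the defconfig name (tegra_defconfig) is an
# assumption -- adjust both to your installation.
SRC="$HOME/nvidia/nvidia_sdk/DRIVE_OS_6.0.6_SDK_Linux_DRIVE_AGX_ORIN_DEVKITS/DRIVEOS/drive-linux/kernel/source/oss_src"
if [ -d "$SRC/kernel" ]; then
    cd "$SRC/kernel" || exit 1
    # Configure, then generate the version headers and build scripts that
    # out-of-tree builds expect (roughly what a linux-headers package ships).
    make ARCH=arm64 tegra_defconfig
    make ARCH=arm64 modules_prepare
else
    echo "kernel source not found under $SRC" >&2
fi
```

On the target itself (rather than a cross-compile host) the ARCH variable can be dropped, since the board is already aarch64.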

Hi, @VickNV
I have installed DRIVE OS using the SDK Manager, and I can find the files you mentioned. But what should I do after that? Thanks for your help.

Dear @haomiao.wang,
May I know why you need the Linux kernel headers? Did you check the steps Vick pointed to in Compiling the Kernel (NVIDIA DRIVE OS Linux)?

Dear @SivaRamaKrishnaNV,
Yes, I have followed that reference. The reason I need the Linux kernel headers is that I want to use BPF tools. I have tested bcc and bpftrace on Orin-X, but I only get user-space stack traces; there are no kernel stack traces. I have enabled all the kernel config options required by the bcc and bpftrace documentation. The only difference from my x86_64 PC that I can think of is the Linux kernel headers, so I want to test with the kernel headers installed on Orin-X.
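For bcc specifically, a headers package is not strictly required: bcc first looks under /lib/modules/$(uname -r)/build and also honors the BCC_KERNEL_SOURCE environment variable. So one option (a sketch, assuming the DRIVE OS source tree has already been configured and run through modules_prepare) is to point bcc directly at the SDK's source tree:

```shell
#!/bin/sh
# Sketch: let bcc find kernel headers on a system with no linux-headers
# package, by pointing BCC_KERNEL_SOURCE at the DRIVE OS kernel source.
# The tree must already be prepared (make tegra_defconfig && make modules_prepare).
export BCC_KERNEL_SOURCE="$HOME/nvidia/nvidia_sdk/DRIVE_OS_6.0.6_SDK_Linux_DRIVE_AGX_ORIN_DEVKITS/DRIVEOS/drive-linux/kernel/source/oss_src/kernel"
# Then run the bcc tool as usual in the same shell, e.g.:
#   offcputime -K 5
```

Note this only affects how bcc compiles its BPF programs; whether kernel stack traces symbolize is a separate question (see kallsyms below in the thread).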

Did you try on P3710 or IPU04?

If P3710, could you provide more details about the specific BPF program you executed? Please share your experience and any relevant information.

Hi, @VickNV
My device is IPU04. I used the offcputime and profile tools from bcc. For bpftrace, I used Brendan Gregg's offcputime.bt program.
For bcc's offcputime, I have posted an issue on iovisor/bcc.
For profile, I can post some of the log below:

  1

[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
ioctl
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
start_thread
[unknown]
-                MediaConsumer6 (980)
    1

[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
wait4
_IO_proc_close
_IO_file_close_it
fclose
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
start_thread
[unknown]
-                AA_DiagApp_M (2282)
    1

[unknown]
std::chrono::_V2::steady_clock::now()
Timer::getTime()
Timer::Timer(double, std::function<void ()>)::{lambda()#1}::operator()() const
void std::__invoke_impl<void, Timer::Timer(double, std::function<void ()>)::{lambda()#1}>(std::__invoke_other, Timer::Timer(double, std::function<void ()>)::{lambda()#1}&&)
std::__invoke_result<Timer::Timer(double, std::function<void ()>)::{lambda()#1}>::type std::__invoke<Timer::Timer(double, std::function<void ()>)::{lambda()#1}>(std::__invoke_result&&, (Timer::Timer(double, std::function<void ()>)::{lambda()#1}&&)...)
void std::thread::_Invoker<std::tuple<Timer::Timer(double, std::function<void ()>)::{lambda()#1}> >::_M_invoke<0ul>(std::_Index_tuple<0ul>)
std::thread::_Invoker<std::tuple<Timer::Timer(double, std::function<void ()>)::{lambda()#1}> >::operator()()
std::thread::_State_impl<std::thread::_Invoker<std::tuple<Timer::Timer(double, std::function<void ()>)::{lambda()#1}> > >::_M_run()
[unknown]
start_thread
[unknown]
-                publisher_membe (2611307)
    1

epoll_pwait
sd_event_wait
sd_event_run
[unknown]
[unknown]
__libc_start_main
[unknown]
-                systemd (1)
    1

[unknown]
select
[unknown]
start_thread
[unknown]
-                NvIpcDst (980)
    1

[unknown]
nvsipl::CNvFSensorPipeline::DoALGProcessing(nvsipl::INvMBuffer const*)
nvsipl::CNvFSubframePipeline::ThreadFunc()
nvsipl::CNvMThread::m_Func()
nvsipl::CNvMThread::m_FuncStatic(nvsipl::CNvMThread*)
std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (*)(nvsipl::CNvMThread*), nvsipl::CNvMThread*> > >::_M_run()
[unknown]
start_thread
[unknown]
-                SIPL_VI_ISP_4 (981)
    1

[unknown]
nvsipl::CNvFusaISPBlock::ProcessISP(nvsipl::INvMBuffer*, nvsipl::INvMBuffer* const*)
nvsipl::CNvFSubframePipeline::SubmitIspRequest(nvsipl::INvMBuffer*, nvsipl::INvMBuffer* (&) [3])
nvsipl::CNvFSubframePipeline::ThreadFunc()
nvsipl::CNvMThread::m_Func()
nvsipl::CNvMThread::m_FuncStatic(nvsipl::CNvMThread*)
std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (*)(nvsipl::CNvMThread*), nvsipl::CNvMThread*> > >::_M_run()
[unknown]
start_thread
[unknown]
-                SIPL_VI_ISP_6 (981)
    1

[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
-                sh (2916690)
    1

std::chrono::_V2::steady_clock::now()
Timer::getTime()
Timer::Timer(double, std::function<void ()>)::{lambda()#1}::operator()() const
void std::__invoke_impl<void, Timer::Timer(double, std::function<void ()>)::{lambda()#1}>(std::__invoke_other, Timer::Timer(double, std::function<void ()>)::{lambda()#1}&&)
std::__invoke_result<Timer::Timer(double, std::function<void ()>)::{lambda()#1}>::type std::__invoke<Timer::Timer(double, std::function<void ()>)::{lambda()#1}>(std::__invoke_result&&, (Timer::Timer(double, std::function<void ()>)::{lambda()#1}&&)...)
void std::thread::_Invoker<std::tuple<Timer::Timer(double, std::function<void ()>)::{lambda()#1}> >::_M_invoke<0ul>(std::_Index_tuple<0ul>)
std::thread::_Invoker<std::tuple<Timer::Timer(double, std::function<void ()>)::{lambda()#1}> >::operator()()
std::thread::_State_impl<std::thread::_Invoker<std::tuple<Timer::Timer(double, std::function<void ()>)::{lambda()#1}> > >::_M_run()
[unknown]
start_thread
[unknown]
-                publisher_membe (2611307)
    1

[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
[unknown]
-                apt-get (2916638)
    1

clock_gettime
std::chrono::_V2::steady_clock::now()
Timer::getTime()
Timer::Timer(double, std::function<void ()>)::{lambda()#1}::operator()() const
void std::__invoke_impl<void, Timer::Timer(double, std::function<void ()>)::{lambda()#1}>(std::__invoke_other, Timer::Timer(double, std::function<void ()>)::{lambda()#1}&&)
std::__invoke_result<Timer::Timer(double, std::function<void ()>)::{lambda()#1}>::type std::__invoke<Timer::Timer(double, std::function<void ()>)::{lambda()#1}>(std::__invoke_result&&, (Timer::Timer(double, std::function<void ()>)::{lambda()#1}&&)...)
void std::thread::_Invoker<std::tuple<Timer::Timer(double, std::function<void ()>)::{lambda()#1}> >::_M_invoke<0ul>(std::_Index_tuple<0ul>)
std::thread::_Invoker<std::tuple<Timer::Timer(double, std::function<void ()>)::{lambda()#1}> >::operator()()
std::thread::_State_impl<std::thread::_Invoker<std::tuple<Timer::Timer(double, std::function<void ()>)::{lambda()#1}> > >::_M_run()
[unknown]
start_thread
[unknown]
-                publisher_membe (2611307)

And here is some of the log from offcputime.bt run with bpftrace:

@[
0xffff800010092370
0xffff8000111bcb2c
0xffff800010022ac4
0xffff800010012acc
,
0xffffab46ccd4
nvsipl::CNvFusaISPBlock::ProcessISP(nvsipl::INvMBuffer*, nvsipl::INvMBuffer* const*)+436
nvsipl::CNvFSubframePipeline::SubmitIspRequest(nvsipl::INvMBuffer*, nvsipl::INvMBuffer* (&) [3])+1088
nvsipl::CNvFSubframePipeline::ThreadFunc()+180
nvsipl::CNvMThread::m_Func()+368
nvsipl::CNvMThread::m_FuncStatic(nvsipl::CNvMThread*)+64
std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (*)(nvsipl::CNvMThread*), nvsipl::CNvMThread*> > >::_M_run()+20
0xffffab850fac
start_thread+388
0xffffab61349c
, SIPL_VI_ISP_1]: 20157
@[
0xffff800010092370
0xffff8000111bcb2c
0xffff800010022ac4
0xffff800010012acc
,
0xffff9968dd28
0xffff9968e58c
0xffff99692b18
0xffff99693030
0xffff99616f3c
0xffff99616fd8
0xaaaadce44a14
0xffff9962eb80
0xffff99758bdc
0xaaaadce43b00
0xffff99225e10
0xaaaadce43bf0
, apt-get]: 20158
@[
0xffff800010092370
0xffff8000111bcb2c
0xffff8000111be934
0xffff8000111bea6c
0xffff8000111bec94
0xffff8000111bf410
0xffff8000111bf8e0
0xffff8000111bfb6c
0xffff8000111be718
0xffff80001025ebcc
0xffff800010260328
0xffff80001026037c
0xffff8000102644b8
0xffff8000102646a4
0xffff80001002d240
0xffff80001002d3bc
0xffff8000111b92c0
0xffff8000111b9860
0xffff800010011f84
,
0xffff91705fd8
0xffff916f9f18
0xffff916f1f2c
0xffff91703f10
0xffff916efa5c
0xffff916efd88
0xffff916ef108
, sh]: 20189
@[
0xffff800010092370
0xffff8000111bca0c
0xffff8000111bca90
0xffff800010096014
0xffff8000111c1b00
0xffff8000100b0598
0xffff8000100b0998
0xffff8000102de370
0xffff8000102ce524
0xffff8000102cf428
0xffff8000102d1eac
0xffff8000102d1ef4
0xffff80001002d240
0xffff80001002d3bc
0xffff8000111b92c0
0xffff8000111b9860
0xffff800010011f84
,
0xffffb24eec80
0xffffb249af4c
0xffffb249a358
0xffffb249bff8
0xffffb249a190
0xffffb248e790
0xaaaac0546b64
0xaaaac0546a50
0xaaaac05439f8
0xffffb246147c
0xffffb246160c
0xffffb244be14
0xaaaac0542bfc
, head]: 20190
@[
0xffff800010092370
0xffff8000111bcb2c
0xffff800010022ac4
0xffff800010012acc
,
0xaaaae3e1ef74
0xffff9b69078c
start_thread+388
0xffff969a849c
, PVRControlTask]: 20190
@[
0xffff800010092370
0xffff8000111bd050
0xffff8000111bc0fc
0xffff800010011d3c
0xffff800010577200
0xffff8000105787b8
0xffff800010581348
0xffff800010583284
0xffff80001056e200
0xffff80001055e8b4
0xffff8000102db588
0xffff8000102dbbf4
0xffff8000102dbcb8
0xffff80001002d240
0xffff80001002d3bc
0xffff8000111b92c0
0xffff8000111b9860
0xffff800010011f84
,
0xffffabdb81e8
, AA_DiagApp_M]: 20221
@[
0xffff800010092370
0xffff8000111bcb2c
0xffff800010022ac4
0xffff800010012acc
,
0xffff9113d314
0xffff911650a8
0xffff91165128
0xffff91554af8
0xffff915549c0
0xffff91557748
0xffff91552b74
0xffff91552cb0
0xffff9155355c
0xffff919e33bc
0xffff919e3220
0xffff9209f298
0xffff91552b30
0x4388c8
0x4401b8
0x43ebe8
0x43d554
0x43c0ec
0x43ae28
0x439964
0x43762c
0x44f78c
0x44f6b0
0x44f5c4
0x44f220
0x44ec24
0xffff91408fac
0xffff91787624
0xffff911c949c
, publisher_membe]: 20221
@[
0xffff800010092370
0xffff8000111bcb2c
0xffff800010022ac4
0xffff800010012acc
,
0xaaaab0fb3e24
0xaaaab0f52cc4
0xaaaab0f3e9c4
__libc_start_main+232
0xaaaab0f4080c
, sshd]: 20222
@[, , SIPL_VI_ISP_4]: 20253
@[, , publisher_membe]: 20253
@[, , SIPL_VI_ISP_4]: 20285
@[, , (md.daily)]: 20317
@[, , cat]: 20318
@[, , PipelineEvent]: 20318
@[, , grep]: 20318
@[, , MediaConsumer6]: 20318
@[, , apt-get]: 20349
@[, , publisher_membe]: 20349
@[, , date.sh]: 20349
@[, , sh]: 20381
@[, , sh]: 20381
@[, , nvgpu_channel_p]: 20382
@[, , date]: 20413
@[, , apt-config]: 20413
@[, , sshd]: 20445
@[, , SIPL_VI_ISP_7]: 20477
@[, , ps]: 20477
@[, , apt-helper]: 20477
@[, , date]: 20478
@[, , cat]: 20478
@[, , apt-config]: 20478
@[, , PVRControlTask]: 20509
@[, , SIPL_VI_ISP_1]: 20510
@[, , cat]: 20510
@[, , nvidia-modeset/]: 20510
@[, , cat]: 20541
@[, , SIPL_VI_ISP_2]: 20541
@[, , publisher_membe]: 20605
@[, , Xorg]: 20605
@[, , xcompmgr]: 20606
@[, , SIPL_VI_ISP_7]: 20637
@[, , cat]: 20669
@[, , adg_comd]: 20701
@[, , apt.systemd.dai]: 20702
@[, , sh]: 20702
@[, , date]: 20733
@[, , SIPL_VI_ISP_5]: 20733
@[, , bash]: 20765
@[, , SIPL_VI_ISP_5]: 20797
@[, , apt.systemd.dai]: 20798
@[, , cat]: 20829

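One possible cause of the raw 0xffff8000… kernel addresses above, independent of headers, is that the tool could not symbolize them via /proc/kallsyms. A quick check (a sketch; these are standard Linux sysctls, not DRIVE OS-specific):

```shell
#!/bin/sh
# If kptr_restrict is nonzero, /proc/kallsyms exposes zeroed addresses to
# unprivileged readers, and BPF tools then print raw kernel pointers
# instead of symbol names. Inspect the current setting and a sample:
cat /proc/sys/kernel/kptr_restrict
head -n 3 /proc/kallsyms
# Relaxing it for a profiling session (as root) would be:
#   sysctl -w kernel.kptr_restrict=0
```

If kallsyms already shows real addresses as root, the remaining suspects are usually missing frame pointers or ORC/unwind support in the kernel build rather than the headers themselves.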
If you need anything more, please let me know.

Dear @haomiao.wang,
This forum is intended to support issues related to P3710. Could you please contact Desay for support on IPU04?

@SivaRamaKrishnaNV All right, thank you.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.