[BUG] target-docker-container running driveworks sample_hello_world failed

Required Info:

  • Software Version
    DRIVE OS 6.0.6
  • Target OS
    Linux
  • SDK Manager Version
    1.9.2.10884
  • Host Machine Version
    native Ubuntu Linux 20.04 Host installed with DRIVE OS DOCKER Containers

Describe the bug

Following this blog post: https://developer.nvidia.com/blog/running-docker-containers-directly-on-nvidia-drive-agx-orin/#entry-content-comments.

In the target docker container, sudo /usr/local/driveworks/bin/sample_hello_world failed with many errors.

To Reproduce

# start and enter the container
./docker/run/orin_start.sh
./docker/run/orin_into.sh
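Before launching, it can help to confirm that the NVIDIA container runtime is actually registered with the Docker daemon. This is a generic Docker CLI check, not specific to these scripts:

```shell
# List the runtimes Docker knows about; "nvidia" should appear alongside runc.
docker info 2>/dev/null | grep -i 'runtimes'
```

If "nvidia" is missing here, the `--runtime nvidia` flag in orin_start.sh will fail before any csv mounting happens.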

The CSV files in /etc/nvidia-container-runtime/host-files-for-container.d/ are attached here.

The key part of orin_start.sh is:

+ docker run --runtime nvidia --gpus all -it -d --privileged \
    --name gw_orin_20.04_nvidia \
    -e DOCKER_USER=nvidia -e USER=nvidia -e DOCKER_USER_ID=1000 \
    -e DOCKER_GRP=nvidia -e DOCKER_GRP_ID=1000 \
    -e DOCKER_IMG=arm64v8/ros:foxy -e USE_GPU=1 \
    -e NVIDIA_VISIBLE_DEVICES=all \
    -e NVIDIA_DRIVER_CAPABILITIES=compute,graphics,video,utility,display \
    -e DISPLAY \
    -v /home/nvidia/zhensheng/orin_ws/nv_driveworks_demo/target:/target \
    -v /usr/local/driveworks-5.10:/usr/local/driveworks-5.10 \
    -v /usr/local/cuda-11.4:/usr/local/cuda-11.4 \
    -v /dev:/dev \
    -v /home/nvidia/zhensheng/cuda-sample:/home/nvidia/zhensheng/cuda-sample \
    -v /home/nvidia/.cache:/home/nvidia/.cache \
    -v /dev/bus/usb:/dev/bus/usb \
    -v /media:/media \
    -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
    -v /etc/localtime:/etc/localtime:ro \
    -v /usr/src:/usr/src \
    -v /lib/modules:/lib/modules \
    -v /dev/null:/dev/raw1394 \
    --net host --ipc host \
    --cap-add SYS_ADMIN --cap-add SYS_PTRACE \
    -w /target \
    --add-host in_orin_docker:127.0.0.1 --add-host tegra-ubuntu:127.0.0.1 \
    --hostname in_orin_docker --shm-size 2G \
    arm64v8/ros:foxy /bin/bash

Expected behavior

# in target-orin-host
/usr/local/driveworks/bin/sample_hello_world 
*************************************************
Welcome to Driveworks SDK
[04-02-2023 08:03:35] Platform: Detected Drive Orin P3710
[04-02-2023 08:03:35] TimeSource: monotonic epoch time offset is 1675150951129556
[04-02-2023 08:03:35] TimeSourceVibranteLinux: detect valid PTP interface mgbe2_0
[04-02-2023 08:03:35] TimeSource: Could not detect valid PTP time source at nvpps. Fallback to mgbe2_0
[04-02-2023 08:03:35] PTP Time is available from Eth Driver
[04-02-2023 08:03:35] Adding variable DW_Base:DW_Version
[04-02-2023 08:03:35] Added variable DW_Base:DW_Version
[04-02-2023 08:03:35] Platform: number of GPU devices detected 1
[04-02-2023 08:03:35] Platform: currently selected GPU device 0, Resource Data Dir: trt_08_05_10_03, Arch: ga10b
[04-02-2023 08:03:35] Platform: currently selected GPU device integrated ID 0
[04-02-2023 08:03:35] CUDLAEngine:getDLACount: CUDLA version is = 1003000
[04-02-2023 08:03:35] CUDLAEngine:getDLACount: Number of DLA devices = 2
[04-02-2023 08:03:35] Context::mountResourceCandidateDataPath resource FAILED to mount from './resources': VirtualFileSystem: Failed to mount './resources/resources.pak'
[04-02-2023 08:03:35] Context::mountResourceCandidateDataPath resource FAILED to mount from '/home/nvidia/zhensheng/cuda-sample/samples/1_Utilities/deviceQuery/data': VirtualFileSystem: Failed to mount '/home/nvidia/zhensheng/cuda-sample/samples/1_Utilities/deviceQuery/data/resources.pak'
[04-02-2023 08:03:35] Context::findDataRootInPathWalk data/DATA_ROOT found at: /usr/local/driveworks-5.10/bin/../data
[04-02-2023 08:03:35] Context::mountResourceCandidateDataPath resource FAILED to mount from '/usr/local/driveworks-5.10/bin/../data': VirtualFileSystem: Failed to mount '/usr/local/driveworks-5.10/bin/../data/resources.pak'
[04-02-2023 08:03:35] Context::findDataRootInPathWalk data/DATA_ROOT found at: /usr/local/driveworks-5.10/data
[04-02-2023 08:03:35] Context::mountResourceCandidateDataPath resource FAILED to mount from '/usr/local/driveworks-5.10/data': VirtualFileSystem: Failed to mount '/usr/local/driveworks-5.10/data/resources.pak'
[04-02-2023 08:03:35] Context::findResourcesPackageInPathWalk: Could not find ./resources/resources.pak in upto 7 parent directories from /usr/local/driveworks-5.10/bin/../lib/libdw_base.so.5.10
[04-02-2023 08:03:35] Context::findResourcesPackageInPathWalk: Could not find ./resources/resources.pak in upto 7 parent directories from /usr/local/driveworks-5.10/targets/aarch64-Linux/lib/libdw_base.so.5.10
[04-02-2023 08:03:35] SDK: No resources(.pak) mounted, some modules will not function properly
[04-02-2023 08:03:35] egl::Display: found 1 EGL devices
[04-02-2023 08:03:35] egl::Display: use drm device: drm-nvdc
[04-02-2023 08:03:36] TimeSource: monotonic epoch time offset is 1675150951129557
[04-02-2023 08:03:36] TimeSourceVibranteLinux: detect valid PTP interface mgbe2_0
[04-02-2023 08:03:36] TimeSource: Could not detect valid PTP time source at nvpps. Fallback to mgbe2_0
[04-02-2023 08:03:36] PTP Time is available from Eth Driver
[04-02-2023 08:03:36] Initialize DriveWorks SDK v5.10.87
[04-02-2023 08:03:36] Release build with GNU 9.3.0 from buildbrain-branch-0-g9a5b4670e12 against Drive PDK v6.0.6.0
Context of Driveworks SDK successfully initialized.
Version: 5.10.87
GPU devices detected: 1
[04-02-2023 08:03:36] Platform: currently selected GPU device 0, Resource Data Dir: trt_08_05_10_03, Arch: ga10b
[04-02-2023 08:03:36] Platform: currently selected GPU device integrated ID 0
----------------------------------------------
Device: 0, Orin
CUDA Driver Version / Runtime Version : 11.8 / 11.4
CUDA Capability Major/Minor version number: 8.7
Total amount of global memory in MBytes:28458
Memory Clock rate Khz: 1275000
Memory Bus Width bits: 128
L2 Cache Size: 4194304
Maximum 1D Texture Dimension Size (x): 131072
Maximum 2D Texture Dimension Size (x,y): 131072, 65536
Maximum 3D Texture Dimension Size (x,y,z): 16384, 16384, 16384
Maximum Layered 1D Texture Size, (x): 32768 num: 2048
Maximum Layered 2D Texture Size, (x,y): 32768, 32768 num: 2048
Total amount of constant memory bytes: 65536
Total amount of shared memory per block bytes: 49152
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): 1024,1024,64
Max dimension size of a grid size (x,y,z): 2147483647,65535,65535
Maximum memory pitch bytes: 2147483647
Texture alignment bytes: 512
Concurrent copy and kernel execution: Yes, copy engines num: 2
Run time limit on kernels: No
Integrated GPU sharing Host Memory: Yes
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Domain ID: 0, Device PCI Bus ID: 0, Device PCI location ID: 0
Compute Mode: Default (multiple host threads can use ::cudaSetDevice() with device simultaneously)
Concurrent kernels: 1
Concurrent memory: 0

[04-02-2023 08:03:36] Releasing Driveworks SDK Context
Happy autonomous driving!

Actual behavior

# in target-docker-container
sudo /usr/local/driveworks/bin/sample_hello_world 
*************************************************
Welcome to Driveworks SDK
[04-02-2023 08:05:43] Platform: Detected Drive Orin P3710
[04-02-2023 08:05:43] TimeSource: monotonic epoch time offset is 1675150951129556
[04-02-2023 08:05:43] TimeSourceVibranteLinux: detect valid PTP interface mgbe2_0
[04-02-2023 08:05:43] TimeSource Nvpss : PTP ioctl returned error. Synchronized time will not be available from this timesource.
[04-02-2023 08:05:43] TimeSource: Could not detect valid PTP time source at nvpps. Fallback to mgbe2_0
[04-02-2023 08:05:43] PTP Time is available from Eth Driver
[04-02-2023 08:05:43] Adding variable DW_Base:DW_Version
[04-02-2023 08:05:43] Added variable DW_Base:DW_Version
[04-02-2023 08:05:43] Platform: number of GPU devices detected 1
[04-02-2023 08:05:43] Platform: currently selected GPU device 0, Resource Data Dir: trt_08_05_10_03, Arch: ga10b
[04-02-2023 08:05:43] Platform: currently selected GPU device integrated ID 0
[04-02-2023 08:05:43] CUDLAEngine:getDLACount: Error in cudlaGetVersion = 2147483647
[04-02-2023 08:05:43] CUDLAEngine:getDLACount: Error in cudlaDeviceGetCount = 2147483647
[04-02-2023 08:05:43] Context::mountResourceCandidateDataPath resource FAILED to mount from './resources': VirtualFileSystem: Failed to mount './resources/resources.pak'
[04-02-2023 08:05:43] Context::mountResourceCandidateDataPath resource FAILED to mount from '/home/nvidia/data': VirtualFileSystem: Failed to mount '/home/nvidia/data/resources.pak'
[04-02-2023 08:05:43] Context::findDataRootInPathWalk data/DATA_ROOT found at: /usr/local/driveworks-5.10/bin/../data
[04-02-2023 08:05:43] Context::mountResourceCandidateDataPath resource FAILED to mount from '/usr/local/driveworks-5.10/bin/../data': VirtualFileSystem: Failed to mount '/usr/local/driveworks-5.10/bin/../data/resources.pak'
[04-02-2023 08:05:43] Context::findDataRootInPathWalk data/DATA_ROOT found at: /usr/local/driveworks-5.10/data
[04-02-2023 08:05:43] Context::mountResourceCandidateDataPath resource FAILED to mount from '/usr/local/driveworks-5.10/data': VirtualFileSystem: Failed to mount '/usr/local/driveworks-5.10/data/resources.pak'
[04-02-2023 08:05:43] Context::findResourcesPackageInPathWalk: Could not find ./resources/resources.pak in upto 7 parent directories from /usr/local/driveworks-5.10/bin/../lib/libdw_base.so.5.10
[04-02-2023 08:05:43] Context::findResourcesPackageInPathWalk: Could not find ./resources/resources.pak in upto 7 parent directories from /usr/local/driveworks-5.10/targets/aarch64-Linux/lib/libdw_base.so.5.10
[04-02-2023 08:05:43] SDK: No resources(.pak) mounted, some modules will not function properly
[04-02-2023 08:05:43] egl::Display: found 1 EGL devices
[04-02-2023 08:05:43] egl::Display: use drm device: drm-nvdc
[04-02-2023 08:05:43] egl::Display: Could not init EGL
[04-02-2023 08:05:43] Driveworks exception thrown: DW_CANNOT_CREATE_OBJECT: Could not init EGL

Cannot init SDK

Additional context

  1. The need for sudo is also reported in [BUG] target-docker-container running cuda-samples require unintended extra permission (nvidia.com)
  2. Could you provide a tutorial on how to use DriveWorks and dwcgf in the target docker container?

Dear @lizhensheng,
May I know which docker image you used on the target? Also, I doubt that accelerators other than the GPU are enabled. Let me check that.

Thanks for your reply!

I don’t quite understand. I would guess the other accelerators can be enabled by mapping /dev into the container, and that nvidia-container-toolkit can handle this.

According to this link, NVIDIA Container Runtime on Jetson · NVIDIA/nvidia-docker Wiki · GitHub,
nvidia-container-toolkit supports many hardware accelerators on Jetson AGX kits.

Thanks.

You will need to use ‘strace’ to find the dynamic libraries that are not explicitly listed in the dynamic section and add them to the drivers.csv file. Below is an example drivers.csv from running ‘sample_hello_world’ within a docker container on the target with a future DW version. It may be useful for your reference.

dir, /usr/lib/firmware/tegra23x
lib, /usr/lib/libcuda.so.1
lib, /usr/lib/libnvrm_gpu.so
lib, /usr/lib/libnvrm_mem.so
lib, /usr/lib/libnvrm_sync.so
lib, /usr/lib/libnvrm_host1x.so
lib, /usr/lib/libnvos.so
lib, /usr/lib/libnvsocsys.so
lib, /usr/lib/libnvtegrahv.so
lib, /usr/lib/libnvsciipc.so
lib, /usr/lib/libnvrm_chip.so
lib, /usr/lib/libnvcucompat.so
lib, /lib/aarch64-linux-gnu/libEGL_nvidia.so.0
sym, /usr/lib/libcuda.so
lib, /usr/lib/libcuda.so.1
sym, /usr/lib/libnvscibuf.so
lib, /usr/lib/libnvscibuf.so.1
lib, /usr/lib/libnvscicommon.so.1
lib, /usr/lib/libnvsciipc.so
lib, /lib/aarch64-linux-gnu/libudev.so.1
lib, /lib/aarch64-linux-gnu/libusb-1.0.so.0
lib, /lib/aarch64-linux-gnu/librt.so.1
lib, /lib/aarch64-linux-gnu/libX11.so.6
lib, /lib/aarch64-linux-gnu/libXrandr.so.2
lib, /lib/aarch64-linux-gnu/libXinerama.so.1
lib, /lib/aarch64-linux-gnu/libXi.so.6
lib, /lib/aarch64-linux-gnu/libXcursor.so.1
lib, /usr/lib/libdrm.so.2
lib, /lib/aarch64-linux-gnu/libdl.so.2
lib, /lib/aarch64-linux-gnu/libpthread.so.0
lib, /lib/aarch64-linux-gnu/libXext.so.6
lib, /lib/aarch64-linux-gnu/libXxf86vm.so.1
lib, /lib/aarch64-linux-gnu/libGLESv2_nvidia.so.2
lib, /lib/aarch64-linux-gnu/libstdc++.so.6
lib, /lib/aarch64-linux-gnu/libm.so.6
lib, /lib/aarch64-linux-gnu/libgcc_s.so.1
lib, /lib/aarch64-linux-gnu/libc.so.6
lib, /usr/lib/libgnat-23.20220512.so
lib, /usr/lib/libnvrm_host1x.so
lib, /usr/lib/libnvdla_runtime.so
lib, /usr/lib/libnvidia-glsi.so.535.00
lib, /usr/lib/libnvrm_chip.so
lib, /usr/lib/libnvrm_surface.so
lib, /usr/lib/libnvrm_sync.so
lib, /usr/lib/libnvos.so
lib, /usr/lib/libnvrm_gpu.so
lib, /usr/lib/libnvrm_mem.so
lib, /usr/lib/libNvFsiCom.so
lib, /usr/lib/libnvmedia_iep_sci.so
lib, /usr/lib/libnvmedia2d.so
lib, /usr/lib/libnvmedialdc.so
lib, /usr/lib/libnvmedia_ijpe_sci.so
lib, /usr/lib/libnvmedia_ide_parser.so
lib, /usr/lib/libnvmedia_ide_sci.so
lib, /usr/lib/aarch64-linux-gnu/libz.so.1
lib, /usr/lib/libnvmedia_tensor.so
lib, /usr/lib/libnvmedia_dla.so
lib, /usr/lib/libnvscistream.so.1
lib, /usr/lib/aarch64-linux-gnu/libgomp.so.1
lib, /usr/lib/libnvsipl.so
lib, /usr/lib/libnvsipl_devblk.so
lib, /usr/lib/libnvsipl_query.so
lib, /usr/lib/libnvparser.so
lib, /usr/lib/aarch64-linux-gnu/libnvinfer.so.8
lib, /usr/lib/aarch64-linux-gnu/libnvonnxparser.so.8
lib, /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8
lib, /usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8
lib, /lib/aarch64-linux-gnu/libxcb.so.1
lib, /lib/aarch64-linux-gnu/libXrender.so.1
lib, /lib/aarch64-linux-gnu/libXfixes.so.3
lib, /usr/lib/libnvpvaintf.so
sym, /lib/aarch64-linux-gnu/libGL.so
sym, /lib/aarch64-linux-gnu/libGL.so.1
lib, /lib/aarch64-linux-gnu/libGL.so.1.7.0
lib, /lib/aarch64-linux-gnu/libGLX.so.0
lib, /lib/aarch64-linux-gnu/libGLdispatch.so.0
lib, /lib/aarch64-linux-gnu/libGLU.so.1
lib, /usr/lib/libnvsocsys.so
lib, /usr/lib/libnvidia-rmapi-tegra.so.535.00
lib, /usr/lib/libnvrm_interop_gpu.so
lib, /usr/lib/libnvtegrahv.so
lib, /usr/lib/libnvivc.so
lib, /usr/lib/libnvscievent.so
lib, /usr/lib/libnvvideo.so
lib, /usr/lib/libnvvic.so
lib, /usr/lib/libnvmedia_eglstream.so
lib, /usr/lib/libnvfusacap.so
lib, /usr/lib/libnvsipl_control.so
lib, /usr/lib/libnvsipl_devblk_cdi.so
lib, /usr/lib/libnvsipl_devblk_ddi.so
lib, /usr/lib/libnvsipl_devblk_crypto.so
lib, /usr/lib/libnvdla_compiler.so
lib, /lib/aarch64-linux-gnu/libXau.so.6
lib, /lib/aarch64-linux-gnu/libXdmcp.so.6
lib, /usr/lib/libnvpvaumd.so
lib, /usr/lib/libnvrm_stream.so
lib, /usr/lib/libnvisppg.so
lib, /usr/lib/libnvpkcs11.so
lib, /lib/aarch64-linux-gnu/libbsd.so.0
lib, /usr/lib/libnvisp.so
lib, /usr/lib/libteec.so
lib, /usr/lib/libnvvse.so
sym, /usr/lib/libnvscisync.so
lib, /usr/lib/libnvscisync.so.1
lib, /usr/lib/libnvcudla.so
lib, /usr/lib/libnvidia-eglcore.so.535.00
lib, /usr/lib/libnvdc.so
lib, /usr/lib/libnvimp.so
lib, /usr/lib/libnvddk_2d_v2.so
lib, /usr/lib/libnvddk_vic.so


@VickNV Thanks for your reply!

I tried your csv, but the error still exists on DRIVE OS 6.0.6.

Your csv should be good, so perhaps something is wrong in my docker scripts.

Could you provide the full steps you used with this csv, e.g. your docker_run.sh and more details on how you start the container?

If you are encountering the errors above, it is possible that you have not added “lib, /usr/lib/libnvidia-eglcore.so.530.00” (the version may vary) to your drivers.csv. This is one of the necessary libraries, and you can identify it using ‘strace’.
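A quick way to verify whether that library actually made it into the container (the path is taken from the csv above; the version suffix will differ per PDK release):

```shell
# Inside the container: if the nvidia runtime mounted the EGL core driver,
# at least one versioned libnvidia-eglcore should be present under /usr/lib.
ls /usr/lib/libnvidia-eglcore.so.* 2>/dev/null \
  || echo "libnvidia-eglcore not mounted - add it to drivers.csv"
```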

Thanks for your info!

Usually I use ldd to find all the libraries the executable depends on, and all of them are found.

nvidia@in_orin_docker:/target$ ldd /usr/local/driveworks-5.10/bin/sample_hello_world 
        linux-vdso.so.1 (0x0000ffff94def000)
        libdrm.so.2 => /lib/aarch64-linux-gnu/libdrm.so.2 (0x0000ffff94d82000)
        libdriveworks_visualization.so.5.10 => /usr/local/driveworks-5.10/bin/../lib/libdriveworks_visualization.so.5.10 (0x0000ffff94c0c000)
        libdw_sensors.so.5.10 => /usr/local/driveworks-5.10/bin/../lib/libdw_sensors.so.5.10 (0x0000ffff93fde000)
        libnvsipl.so => /usr/lib/libnvsipl.so (0x0000ffff93f59000)
        libnvsipl_devblk.so => /usr/lib/libnvsipl_devblk.so (0x0000ffff93f29000)
        libnvsipl_query.so => /usr/lib/libnvsipl_query.so (0x0000ffff93eca000)
        libnvparser.so => /usr/lib/libnvparser.so (0x0000ffff93e81000)
        libdw_imageprocessing.so.5.10 => /usr/local/driveworks-5.10/bin/../lib/libdw_imageprocessing.so.5.10 (0x0000ffff927d0000)
        libGLESv2_nvidia.so.2 => /lib/aarch64-linux-gnu/libGLESv2_nvidia.so.2 (0x0000ffff9279f000)
        libdw_base.so.5.10 => /usr/local/driveworks-5.10/bin/../lib/libdw_base.so.5.10 (0x0000ffff91cc7000)
        libNvFsiCom.so => /usr/lib/libNvFsiCom.so (0x0000ffff91ca8000)
        libnvmedia_iep_sci.so => /usr/lib/libnvmedia_iep_sci.so (0x0000ffff91c89000)
        libnvmedia2d.so => /usr/lib/libnvmedia2d.so (0x0000ffff91c69000)
        libnvmedialdc.so => /usr/lib/libnvmedialdc.so (0x0000ffff91c47000)
        libnvmedia_ijpe_sci.so => /usr/lib/libnvmedia_ijpe_sci.so (0x0000ffff91c2c000)
        libnvmedia_ide_parser.so => /usr/lib/libnvmedia_ide_parser.so (0x0000ffff91c14000)
        libnvmedia_ide_sci.so => /usr/lib/libnvmedia_ide_sci.so (0x0000ffff91bf5000)
        libEGL_nvidia.so.0 => /lib/aarch64-linux-gnu/libEGL_nvidia.so.0 (0x0000ffff91ac5000)
        libz.so.1 => /usr/local/driveworks-5.10/bin/../lib/libz.so.1 (0x0000ffff91a98000)
        libcupva_host.so.2.0 => /usr/local/driveworks-5.10/bin/../lib/libcupva_host.so.2.0 (0x0000ffff91a35000)
        libudev.so.1 => /lib/aarch64-linux-gnu/libudev.so.1 (0x0000ffff919fb000)
        libusb-1.0.so.0 => /lib/aarch64-linux-gnu/libusb-1.0.so.0 (0x0000ffff919d1000)
        librt.so.1 => /lib/aarch64-linux-gnu/librt.so.1 (0x0000ffff919b7000)
        libnvmedia_tensor.so => /usr/lib/libnvmedia_tensor.so (0x0000ffff9199a000)
        libnvmedia_dla.so => /usr/lib/libnvmedia_dla.so (0x0000ffff91977000)
        libcudla.so.1 => /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcudla.so.1 (0x0000ffff9193a000)
        libdwshared.so.5.10 => /usr/local/driveworks-5.10/bin/../lib/libdwshared.so.5.10 (0x0000ffff91809000)
        libdwpbwire.so => /usr/local/driveworks-5.10/bin/../lib/libdwpbwire.so (0x0000ffff917ed000)
        libnvscibuf.so.1 => /usr/lib/libnvscibuf.so.1 (0x0000ffff91777000)
        libnvsciipc.so => /usr/lib/libnvsciipc.so (0x0000ffff91747000)
        libnvscistream.so.1 => /usr/lib/libnvscistream.so.1 (0x0000ffff916c0000)
        libnvscisync.so.1 => /usr/lib/libnvscisync.so.1 (0x0000ffff9168c000)
        libcudart.so.11.0 => /usr/local/cuda-11.4/targets/aarch64-linux/lib/libcudart.so.11.0 (0x0000ffff915d0000)
        libcuda.so.1 => /usr/lib/libcuda.so.1 (0x0000ffff8fcf4000)
        libdwdynamicmemory.so.5.10 => /usr/local/driveworks-5.10/bin/../lib/libdwdynamicmemory.so.5.10 (0x0000ffff8fcdf000)
        libglfw.so => /usr/local/driveworks-5.10/bin/../lib/libglfw.so (0x0000ffff8fcb2000)
        libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000ffff8fc9e000)
        libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000ffff8fc6d000)
        libGL.so.1 => /lib/aarch64-linux-gnu/libGL.so.1 (0x0000ffff8fb76000)
        libGLX.so.0 => /lib/aarch64-linux-gnu/libGLX.so.0 (0x0000ffff8fb34000)
        libGLdispatch.so.0 => /lib/aarch64-linux-gnu/libGLdispatch.so.0 (0x0000ffff8f9a7000)
        libX11.so.6 => /lib/aarch64-linux-gnu/libX11.so.6 (0x0000ffff8f862000)
        libGLU.so.1 => /lib/aarch64-linux-gnu/libGLU.so.1 (0x0000ffff8f7ef000)
        libXext.so.6 => /lib/aarch64-linux-gnu/libXext.so.6 (0x0000ffff8f7cc000)
        libXxf86vm.so.1 => /lib/aarch64-linux-gnu/libXxf86vm.so.1 (0x0000ffff8f7b6000)
        libXinerama.so.1 => /lib/aarch64-linux-gnu/libXinerama.so.1 (0x0000ffff8f7a3000)
        libXrandr.so.2 => /lib/aarch64-linux-gnu/libXrandr.so.2 (0x0000ffff8f786000)
        libXcursor.so.1 => /lib/aarch64-linux-gnu/libXcursor.so.1 (0x0000ffff8f76c000)
        libXi.so.6 => /lib/aarch64-linux-gnu/libXi.so.6 (0x0000ffff8f74c000)
        libstdc++.so.6 => /lib/aarch64-linux-gnu/libstdc++.so.6 (0x0000ffff8f567000)
        libm.so.6 => /lib/aarch64-linux-gnu/libm.so.6 (0x0000ffff8f4bc000)
        libgomp.so.1 => /lib/aarch64-linux-gnu/libgomp.so.1 (0x0000ffff8f46e000)
        libgcc_s.so.1 => /lib/aarch64-linux-gnu/libgcc_s.so.1 (0x0000ffff8f448000)
        libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x0000ffff8f2d5000)
        /lib/ld-linux-aarch64.so.1 (0x0000ffff94dbf000)
        libnvfusacap.so => /usr/lib/libnvfusacap.so (0x0000ffff8f294000)
        libnvsocsys.so => /usr/lib/libnvsocsys.so (0x0000ffff8f280000)
        libnvos.so => /usr/lib/libnvos.so (0x0000ffff8f258000)
        libnvsipl_control.so => /usr/lib/libnvsipl_control.so (0x0000ffff8f209000)
        libnvrm_mem.so => /usr/lib/libnvrm_mem.so (0x0000ffff8f1f0000)
        libnvrm_host1x.so => /usr/lib/libnvrm_host1x.so (0x0000ffff8f1ce000)
        libnvrm_surface.so => /usr/lib/libnvrm_surface.so (0x0000ffff8f196000)
        libnvsipl_devblk_cdi.so => /usr/lib/libnvsipl_devblk_cdi.so (0x0000ffff8f17a000)
        libnvsipl_devblk_ddi.so => /usr/lib/libnvsipl_devblk_ddi.so (0x0000ffff8f15f000)
        libnvrm_chip.so => /usr/lib/libnvrm_chip.so (0x0000ffff8f149000)
        libnvivc.so => /usr/lib/libnvivc.so (0x0000ffff8f134000)
        libnvscievent.so => /usr/lib/libnvscievent.so (0x0000ffff8f120000)
        libnvvideo.so => /usr/lib/libnvvideo.so (0x0000ffff8f039000)
        libnvvic.so => /usr/lib/libnvvic.so (0x0000ffff8f00e000)
        libnvidia-glsi.so.530.00 => /usr/lib/libnvidia-glsi.so.530.00 (0x0000ffff8ef79000)
        libnvrm_sync.so => /usr/lib/libnvrm_sync.so (0x0000ffff8ef61000)
        libnvpvaintf.so => /usr/lib/libnvpvaintf.so (0x0000ffff8ef3f000)
        libnvmedia_eglstream.so => /usr/lib/libnvmedia_eglstream.so (0x0000ffff8ef28000)
        libnvdla_runtime.so => /usr/lib/libnvdla_runtime.so (0x0000ffff8e8cb000)
        libnvrm_gpu.so => /usr/lib/libnvrm_gpu.so (0x0000ffff8e85e000)
        libnvscicommon.so.1 => /usr/lib/libnvscicommon.so.1 (0x0000ffff8e848000)
        libxcb.so.1 => /lib/aarch64-linux-gnu/libxcb.so.1 (0x0000ffff8e811000)
        libXrender.so.1 => /lib/aarch64-linux-gnu/libXrender.so.1 (0x0000ffff8e7f6000)
        libXfixes.so.3 => /lib/aarch64-linux-gnu/libXfixes.so.3 (0x0000ffff8e7e0000)
        libnvisppg.so => /usr/lib/libnvisppg.so (0x0000ffff8e733000)
        libnvrm_stream.so => /usr/lib/libnvrm_stream.so (0x0000ffff8e71b000)
        libnvidia-rmapi-tegra.so.530.00 => /usr/lib/libnvidia-rmapi-tegra.so.530.00 (0x0000ffff8e6cf000)
        libnvrm_interop_gpu.so => /usr/lib/libnvrm_interop_gpu.so (0x0000ffff8e6bb000)
        libnvpvaumd.so => /usr/lib/libnvpvaumd.so (0x0000ffff8e6a3000)
        libnvtegrahv.so => /usr/lib/libnvtegrahv.so (0x0000ffff8e690000)
        libXau.so.6 => /lib/aarch64-linux-gnu/libXau.so.6 (0x0000ffff8e67c000)
        libXdmcp.so.6 => /lib/aarch64-linux-gnu/libXdmcp.so.6 (0x0000ffff8e664000)
        libnvisp.so => /usr/lib/libnvisp.so (0x0000ffff8e5b3000)
        libbsd.so.0 => /lib/aarch64-linux-gnu/libbsd.so.0 (0x0000ffff8e58c000)

And there is no evidence that it depends on eglcore.so:

nvidia@in_orin_docker:/target$ ldd /usr/local/driveworks-5.10/bin/sample_hello_world  | grep eglcore
nvidia@in_orin_docker:/target$ 

Could you show how to use strace to identify all of the libraries it depends on?

Thanks.

As I mentioned earlier, ‘strace’ can be used to find those dynamic libraries used by a program but not explicitly listed in its dynamic section.
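A minimal sketch of that strace usage (the binary path is taken from earlier in the thread; the grep filters are just one way to narrow the output):

```shell
# Trace file-open syscalls. Unlike ldd, this also catches libraries loaded
# at runtime via dlopen(), which never appear in the ELF dynamic section --
# libnvidia-eglcore is loaded this way, so ldd cannot see it.
strace -f -e trace=openat /usr/local/driveworks/bin/sample_hello_world 2>&1 \
  | grep '\.so' | grep -v ENOENT
```

Successful openat() lines show every shared object the process actually mapped; any of those paths missing from drivers.csv is a candidate to add.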


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.