CUDA Libraries on Host Device & Drive AGX Orin

Please provide the following info (tick the boxes after creating this topic):
Software Version
DRIVE OS 6.0.8.1
DRIVE OS 6.0.6
DRIVE OS 6.0.5
DRIVE OS 6.0.4 (rev. 1)
DRIVE OS 6.0.4 SDK
other

Target Operating System
Linux
QNX
other

Hardware Platform
DRIVE AGX Orin Developer Kit (940-63710-0010-300)
DRIVE AGX Orin Developer Kit (940-63710-0010-200)
DRIVE AGX Orin Developer Kit (940-63710-0010-100)
DRIVE AGX Orin Developer Kit (940-63710-0010-D00)
DRIVE AGX Orin Developer Kit (940-63710-0010-C00)
DRIVE AGX Orin Developer Kit (not sure its number)
other

SDK Manager Version
1.9.3.10904
other

Host Machine Version
native Ubuntu Linux 20.04 Host installed with SDK Manager
native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers
native Ubuntu Linux 18.04 Host installed with DRIVE OS Docker Containers
other

I am a beginner trying to run TensorRT samples in C++ on my devices - but I am facing library import issues on both the host and the target device.

To run a TRT inference sample:
The file I am trying to compile (derived from the Hello World sample) contains nothing but header includes -

#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <NvUffParser.h>
#include <iostream>

I attempt to compile this file on both the host and the target device.

  • HOST

    I ran the script in the DRIVE OS Docker container. The error I received is -

    root@6.0.8.1-0006-build-linux-sdk:/drive# g++ try.cpp
    In file included from /usr/include/x86_64-linux-gnu/NvInferRuntimeCommon.h:26,
                     from /usr/include/x86_64-linux-gnu/NvInferLegacyDims.h:16,
                     from /usr/include/x86_64-linux-gnu/NvInfer.h:16,
                     from try.cpp:1:
    /usr/include/x86_64-linux-gnu/NvInferRuntimeBase.h:19:10: fatal error: cuda_runtime_api.h: No such file or directory
       19 | #include <cuda_runtime_api.h>
          |          ^~~~~~~~~~~~~~~~~~~~
    compilation terminated.
    

    Upon searching for the header file, I find copies in several directories.

    root@6.0.8.1-0006-build-linux-sdk:/drive# sudo find / -name cuda_runtime_api.h
    /usr/local/cuda-11.4/targets/x86_64-linux/include/cuda_runtime_api.h
    /usr/local/cuda-11.4/targets/aarch64-linux/include/cuda_runtime_api.h
    /drive/drive-linux/filesystem/targetfs/usr/local/cuda-11.4/targets/aarch64-linux/include/cuda_runtime_api.h
    

    Referring to similar issues [1] and [2], the additional information I can provide is that -

    root@6.0.8.1-0006-build-linux-sdk:/drive# ls -l /usr/local/cuda
    lrwxrwxrwx 1 root root 22 Aug 28 06:18 /usr/local/cuda -> /etc/alternatives/cuda
    

    /usr/local/cuda/ links to a cuda directory in /etc/.
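    Based on the find output above, this is the compile line I plan to try next (a sketch only - the include path comes from the find results, and the library names are my assumption, not taken from any guide):

    ```shell
    # Sketch: point g++ at the per-target CUDA include directory that
    # `find` located; the -l library names are an assumption on my part.
    CUDA_INC="/usr/local/cuda-11.4/targets/x86_64-linux/include"
    CMD="g++ try.cpp -I $CUDA_INC -lnvinfer -lnvonnxparser -o try"
    echo "$CMD"   # echoed here; to be run inside the DRIVE OS container
    ```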

  • TARGET DEVICE
    I ran the script on the Drive AGX Orin device. The error I received is -

    gopalan@tegra-ubuntu:~$ g++ try.cpp
    try.cpp:1:10: fatal error: NvInfer.h: No such file or directory
        1 | #include <NvInfer.h>
          |          ^~~~~~~~~~~
    compilation terminated.
    

    The header file is not present anywhere on the system, although some runtime version of libnvinfer is installed on the device -

    gopalan@tegra-ubuntu:~$ sudo find / -name NvInfer.h
    gopalan@tegra-ubuntu:~$ sudo find / -name *nvinfer*
    /usr/share/doc/libnvinfer8
    /usr/share/doc/libnvinfer-bin
    /usr/share/doc/libnvinfer-plugin8
    /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8
    /usr/lib/aarch64-linux-gnu/do_not_link_against_nvinfer_builder_resource
    /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.5.10
    /usr/lib/aarch64-linux-gnu/libnvinfer_builder_resource.so.8.5.10
    /usr/lib/aarch64-linux-gnu/libnvinfer.so.8.5.10
    /usr/lib/aarch64-linux-gnu/libnvinfer.so.8
    /var/lib/dpkg/info/libnvinfer-bin.md5sums
    /var/lib/dpkg/info/libnvinfer8.shlibs
    /var/lib/dpkg/info/libnvinfer-plugin8.triggers
    /var/lib/dpkg/info/libnvinfer8.md5sums
    /var/lib/dpkg/info/libnvinfer-bin.list
    /var/lib/dpkg/info/libnvinfer8.triggers
    /var/lib/dpkg/info/libnvinfer-plugin8.shlibs
    /var/lib/dpkg/info/libnvinfer-plugin8.md5sums
    /var/lib/dpkg/info/libnvinfer8.list
    /var/lib/dpkg/info/libnvinfer-plugin8.list
    
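    The find output suggests only the runtime shared libraries are present on the target. My guess (the package names here are my assumption, to be confirmed) is that the headers ship in the corresponding -dev packages:

    ```shell
    # Assumption: NvInfer.h and the parser headers come from the TensorRT
    # -dev packages on a Debian-based target. Echoed only, not executed;
    # to be run manually on the Orin if this guess is correct.
    PKGS="libnvinfer-dev libnvonnxparsers-dev libnvparsers-dev"
    for pkg in $PKGS; do
      echo "sudo apt-get install $pkg"
    done
    ```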

I am unsure how to proceed. I would love it if you could clarify -

  1. What is the purpose of /usr/local/cuda-11.4/targets/ on the host? If some header files are architecture-specific, how is one supposed to use them? Are there any guidelines? I ask because some header files present there do not appear in /usr/local/cuda-11.4/include/.

  2. Is the DRIVE OS Docker container self-sufficient? Does it come with all the components required to build a TRT engine and run inference for a sample similar to Hello World? Are there any prerequisite steps needed to execute such a script?

  3. Similarly, is the DRIVE AGX Orin unit (flashed with DRIVE OS through the DRIVE OS Docker container from the host) sufficiently equipped to execute inference similar to Hello World? I am assuming here that the TRT engine for inference is built on the host and transferred to the target.
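Regarding question 1, my current understanding (an assumption I would like confirmed) is that the top-level include/ of a CUDA install is just a view into the per-architecture targets/ tree, so a cross-build for the Orin would point at the aarch64 directory instead:

```shell
# Assumption to be confirmed: the targets/ tree holds one include dir per
# architecture, and a build selects the matching one with -I.
HOST_INC="/usr/local/cuda-11.4/targets/x86_64-linux/include"
CROSS_INC="/usr/local/cuda-11.4/targets/aarch64-linux/include"
echo "native  build: -I $HOST_INC"
echo "aarch64 build: -I $CROSS_INC"
```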

I ask these questions because any inference script must include the header files above - and those includes currently fail on both devices for me.
Please help. Thanks!


Have you referred to the Sample Support Guide and the ‘Running C++ Samples on Linux’ section in it?

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.