Questions about compiling TensorRT C++ code on Jetson Orin NX 16GB

• Hardware Platform (Jetson / GPU) : Jetson Orin NX 16GB
• DeepStream Version : 6.2
• JetPack Version (valid for Jetson only) : 5.1.1 [L4T 35.3.1]
• TensorRT Version : 5.1.1 ( jtop )
• NVIDIA GPU Driver Version (valid for GPU only) : 5.10.104-tegra

I included <NvInfer.h> in my C++ program and created the runtime, engine, and execution context:

// infer initialized
IRuntime* runtime = createInferRuntime(logger);

std::string engine_file_path = "./models/model_trt.trt";
std::ifstream engine_file(engine_file_path, std::ios::binary);
engine_file.seekg(0, std::ios::end);
const size_t engine_size = engine_file.tellg();
engine_file.seekg(0, std::ios::beg);
std::vector<char> engine_data(engine_size);
engine_file.read(engine_data.data(), engine_size);

const char* modelData = engine_data.data();
size_t modelSize = engine_data.size();

ICudaEngine* engine = runtime->deserializeCudaEngine(modelData, modelSize);
IExecutionContext *context = engine->createExecutionContext();
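As a side note, the file-reading part of the snippet above has no error handling, and deserializeCudaEngine returns nullptr on failure. A minimal, self-contained sketch of the file-loading step with checks (the helper name readEngineFile is my own, not from the original code):

```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Read a serialized TensorRT engine file into memory, with error checks.
// Returns an empty vector if the file cannot be opened or read.
std::vector<char> readEngineFile(const std::string& path) {
    // Open at the end so tellg() immediately gives the file size.
    std::ifstream file(path, std::ios::binary | std::ios::ate);
    if (!file) {
        std::cerr << "Failed to open engine file: " << path << "\n";
        return {};
    }
    const std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<char> data(static_cast<size_t>(size));
    if (!file.read(data.data(), size)) {
        std::cerr << "Failed to read engine file: " << path << "\n";
        return {};
    }
    return data;
}
```

With this, you would also want to check that the returned vector is non-empty before calling deserializeCudaEngine, and that the resulting engine pointer is not null before creating the execution context.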

The CMakeLists.txt:

cmake_minimum_required(VERSION 3.5)

find_package(OpenCV REQUIRED)
find_package(CUDA REQUIRED)
find_package(TensorRT REQUIRED)
find_package(ONNX REQUIRED)

# Add executable
add_executable(rtsp-streams main.cpp)
# Link libraries
target_link_libraries(rtsp-streams ${OpenCV_LIBS} ${CUDA_LIBRARIES} ${TensorRT_LIBRARIES} ${ONNX_LIBRARIES} pthread)
# Include directories
target_include_directories(rtsp-streams PRIVATE ${OpenCV_INCLUDE_DIRS} ${CUDA_INCLUDE_DIRS} ${TensorRT_INCLUDE_DIRS} ${ONNX_INCLUDE_DIRS})

Then I build it:

mkdir build
cd build
cmake ..

And then, the output:

(rgzy) root@rgzy:/rg/gst-cv-1/build# cmake ..
-- Found CUDA: /usr/local/cuda (found suitable exact version "11.4") 
-- Found CUDA: /usr/local/cuda (found version "11.4") 
CMake Error at CMakeLists.txt:6 (find_package):
  By not providing "FindTensorRT.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "TensorRT",
  but CMake did not find one.

  Could not find a package configuration file provided by "TensorRT" with any
  of the following names:

    TensorRTConfig.cmake
    tensorrt-config.cmake

  Add the installation prefix of "TensorRT" to CMAKE_PREFIX_PATH or set
  "TensorRT_DIR" to a directory containing one of the above files.  If
  "TensorRT" provides a separate development package or SDK, be sure it has
  been installed.

-- Configuring incomplete, errors occurred!
See also "/rg/gst-cv-1/build/CMakeFiles/CMakeOutput.log".
See also "/rg/gst-cv-1/build/CMakeFiles/CMakeError.log".

I ran sudo find / -name TensorRTConfig.cmake and sudo find / -name tensorrt-config.cmake, but neither file exists anywhere on the system.

The output of dpkg -l | grep TensorRT:

(rgzy5) root@rgzy:/rg/gst-cv-1# dpkg -l | grep TensorRT
ii  graphsurgeon-tf                            8.5.2-1+cuda11.4                     arm64        GraphSurgeon for TensorRT package
ii  libnvinfer-bin                             8.5.2-1+cuda11.4                     arm64        TensorRT binaries
ii  libnvinfer-dev                             8.5.2-1+cuda11.4                     arm64        TensorRT development libraries and headers
ii  libnvinfer-plugin-dev                      8.5.2-1+cuda11.4                     arm64        TensorRT plugin libraries
ii  libnvinfer-plugin8                         8.5.2-1+cuda11.4                     arm64        TensorRT plugin libraries
ii  libnvinfer-samples                         8.5.2-1+cuda11.4                     all          TensorRT samples
ii  libnvinfer8                                8.5.2-1+cuda11.4                     arm64        TensorRT runtime libraries
ii  libnvonnxparsers-dev                       8.5.2-1+cuda11.4                     arm64        TensorRT ONNX libraries
ii  libnvonnxparsers8                          8.5.2-1+cuda11.4                     arm64        TensorRT ONNX libraries
ii  libnvparsers-dev                           8.5.2-1+cuda11.4                     arm64        TensorRT parsers libraries
ii  libnvparsers8                              8.5.2-1+cuda11.4                     arm64        TensorRT parsers libraries
ii  nvidia-tensorrt                            5.1.1-b56                            arm64        NVIDIA TensorRT Meta Package
ii  nvidia-tensorrt-dev                        5.1.1-b56                            arm64        NVIDIA TensorRT dev Meta Package
ii  onnx-graphsurgeon                          8.5.2-1+cuda11.4                     arm64        ONNX GraphSurgeon for TensorRT package
ii  python3-libnvinfer                         8.5.2-1+cuda11.4                     arm64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                     8.5.2-1+cuda11.4                     arm64        Python 3 development package for TensorRT
ii  tensorrt                                            arm64        Meta package for TensorRT
ii  uff-converter-tf                           8.5.2-1+cuda11.4                     arm64        UFF converter for TensorRT package
  1. The documentation for graphsurgeon-tf, libnvinfer-bin, libnvinfer-dev, … is in /usr/share/doc/.
  2. Why are there two TensorRT versions (8.5.2-1 and 5.1.1-b56; jtop also reports 5.1.1)?

There is no lib directory in /usr/src/tensorrt:

(rgzy5) root@rgzy:/usr/src/tensorrt# ls
bin  data  samples

The output of find / -name is

(rgzy5) root@rgzy:/usr/src/tensorrt# find / -name

I’m closing this topic since there has been no update from you for a while; I assume the issue was resolved.
If you still need support, please open a new topic. Thanks.


Could you set TensorRT_DIR to the TensorRT folder and try it again?
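For context: the JetPack Debian packages (libnvinfer-dev etc.) install the TensorRT headers and libraries (typically under /usr/include/aarch64-linux-gnu and /usr/lib/aarch64-linux-gnu), but they do not ship a TensorRTConfig.cmake, so there is no directory that TensorRT_DIR can usefully point to. A common workaround is a small hand-written find module placed in the project, e.g. cmake/FindTensorRT.cmake. This is a minimal sketch under the assumption of a standard JetPack install; the search paths are assumptions, not guaranteed:

```cmake
# cmake/FindTensorRT.cmake -- minimal find module for JetPack-style installs
# where TensorRT ships as Debian packages without a CMake config file.
find_path(TensorRT_INCLUDE_DIR NvInfer.h
  HINTS /usr/include/aarch64-linux-gnu /usr/include/x86_64-linux-gnu)
find_library(TensorRT_LIBRARY nvinfer
  HINTS /usr/lib/aarch64-linux-gnu /usr/lib/x86_64-linux-gnu)
find_library(TensorRT_ONNXPARSER_LIBRARY nvonnxparser
  HINTS /usr/lib/aarch64-linux-gnu /usr/lib/x86_64-linux-gnu)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(TensorRT DEFAULT_MSG
  TensorRT_INCLUDE_DIR TensorRT_LIBRARY)

set(TensorRT_INCLUDE_DIRS ${TensorRT_INCLUDE_DIR})
set(TensorRT_LIBRARIES ${TensorRT_LIBRARY} ${TensorRT_ONNXPARSER_LIBRARY})
```

Then add list(APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/cmake) near the top of CMakeLists.txt so that find_package(TensorRT REQUIRED) resolves through this module instead of looking for a package configuration file.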