Description
I am trying to install tao-converter to convert a trained .etlt model into a TensorRT engine. I am working on a Jetson AGX Xavier, but a problem appears during the installation of TensorRT OSS.
Environment
Jetpack Version:4.6
TensorRT Version: 8.0.1.6
GPU Type: Jetson AGX Xavier
Nvidia Driver Version:
CUDA Version: 10.2
Operating System + Version: Ubuntu 18.04 LTS L4T
Steps To Reproduce
Before installing tao-converter I need to install TensorRT OSS (source: TAO Converter | NVIDIA NGC).
To do so, I followed: https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/TRT-OSS/Jetson.
I upgraded cmake to 3.19.4 (cmake --version confirmed it). I then ran the following commands to build the TensorRT OSS plugin:
git clone -b release/8.0 https://github.com/nvidia/TensorRT (for TensorRT 8.X)
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build
$HOME/install/bin/cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
The problem appears when I run the next command:
make nvinfer_plugin -j$(nproc)
It gives me:
fatal error: cub/cub.cuh: No such file or directory
#include "cub/cub.cuh"
Full traceback here:
make_traceback.txt (30.0 KB)
I also tried adding the cub library to the include_directories section of CMakeLists.txt, but it didn't help. If anyone has an idea it would be really appreciated, thanks.
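For context, one thing I am considering: CUB only started shipping with the CUDA toolkit in CUDA 11.0, so it is plausible that CUDA 10.2 on JetPack 4.6 simply has no cub/cub.cuh on the include path. A possible workaround (a sketch only; the CUB tag, checkout path, and use of CMAKE_CUDA_FLAGS are my assumptions, not confirmed for this setup) would be to fetch the CUB headers separately and point the build at them:

```shell
# Assumption: the build fails because CUDA 10.2 does not bundle CUB
# (it ships with the toolkit from CUDA 11.0 onward).

# Fetch a standalone CUB checkout; 1.8.0 is an older, CUDA-10.x-era tag
# (exact tag name to be verified against the NVlabs/cub repository)
git clone -b 1.8.0 https://github.com/NVlabs/cub.git $HOME/cub

# Re-run cmake with the CUB checkout added to the CUDA compiler's
# include path via CMAKE_CUDA_FLAGS, then rebuild the plugin
cd $TRT_SOURCE/build
$HOME/install/bin/cmake .. -DGPU_ARCHS=72 \
  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ \
  -DCMAKE_C_COMPILER=/usr/bin/gcc \
  -DTRT_BIN_DIR=`pwd`/out \
  -DCMAKE_CUDA_FLAGS="-I$HOME/cub"
make nvinfer_plugin -j$(nproc)
```

I have not confirmed this fixes the error on Jetson; posting it in case it helps narrow down whether the missing header is the real cause.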