Missing cub/cub.h when installing TensorRT OSS

Description

I am trying to install tao-converter to convert a trained .etlt model into a TensorRT engine. I am working on a Jetson Xavier AGX, but a problem appears during the installation of TensorRT OSS.

Environment

JetPack Version: 4.6
TensorRT Version: 8.0.1.6
GPU Type: Jetson Xavier AGX
Nvidia Driver Version:
CUDA Version: 10.2
Operating System + Version: Ubuntu 18.04 LTS (L4T)

Steps To Reproduce

Before installing tao-converter, I need to install TensorRT OSS (source: TAO Converter | NVIDIA NGC).
To do so, I followed: https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/TRT-OSS/Jetson.
I upgraded cmake to 3.19.4 (cmake --version confirmed it). I then ran the following commands to build the TensorRT OSS plugin:

git clone -b release/8.0 https://github.com/nvidia/TensorRT  # for TensorRT 8.x
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build
$HOME/install/bin/cmake .. -DGPU_ARCHS=72  -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out

The problem appears when I run the next command:

make nvinfer_plugin -j$(nproc)

It gives me:

fatal error: cub/cub.cuh: No such file or directory
#include "cub/cub.cuh"

Full traceback here:
make_traceback.txt (30.0 KB)
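Since CUB only started shipping with the CUDA toolkit in CUDA 11.0, the header is presumably just absent on a CUDA 10.2 system like mine. A quick way to check (assuming the default JetPack install path):

find /usr/local/cuda/include -name cub.cuh 2>/dev/null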

I also tried adding the cub library to the include_directories section of CMakeLists.txt, but it didn't help. If anyone has an idea, it would be really appreciated. Thanks.
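For reference, one workaround suggested elsewhere for this error is to fetch the standalone CUB headers and copy them into the CUDA include directory rather than editing CMakeLists.txt. A sketch, assuming CUB 1.8.0 (a release commonly paired with CUDA 10.x) and the default CUDA include path:

wget https://github.com/NVIDIA/cub/archive/1.8.0.zip -O cub-1.8.0.zip
unzip cub-1.8.0.zip
sudo cp -r cub-1.8.0/cub /usr/local/cuda/include/  # makes cub/cub.cuh visible system-wide

Alternatively, the unzipped directory could be passed to the cmake call above via -DCMAKE_CXX_FLAGS="-I$PWD/cub-1.8.0".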

Hi,
Please refer to the links below related to custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.

Thanks!

Thank you for the quick answer, NVES!

I checked your link, but I don't have an .onnx model file right now, just an .etlt.

To reformulate the problem: I have an .etlt model file that I obtained by training with TAO Toolkit. However, .etlt model files can only be used with TAO or DeepStream, so to use the model in my own pipeline I need to convert it to a deployable format such as a TensorRT engine, and that's what I'm trying to do with tao-converter.
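For clarity, the invocation I'm aiming for looks roughly like this (a sketch only; the encryption key, input dimensions, and output node names are placeholders that depend on the trained network):

./tao-converter -k <encryption_key> \
                -d 3,544,960 \
                -o output_cov/Sigmoid,output_bbox/BiasAdd \
                -e model.engine \
                model.etlt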

PS: I don't have DeepStream installed on my system yet; I'll install it and post the news here.

I installed DeepStream 6.0, but I still have the same problem.

Hi,

We recommend you reach out via Issues · NVIDIA/TensorRT · GitHub (https://github.com/NVIDIA/TensorRT/issues) to get better help on issues related to TensorRT OSS.

Thank you.