How to build the tensorrt_demos plugins on JetPack 4.6?

Hi,
Previously I used the tensorrt_demos project to convert an ONNX model to an INT8 engine file; the TensorRT version was 7.1.3 and JetPack was 4.5.1.
Yesterday I updated to the new JetPack 4.6, which comes with TensorRT 8.0.
In the Makefile of the tensorrt_demos plugins there are these lines:

# These are the directories where I installed TensorRT on my x86_64 PC.

TENSORRT_INCS=-I"/usr/local/TensorRT-7.1.3.4/include"
TENSORRT_LIBS=-L"/usr/local/TensorRT-7.1.3.4/lib"

On JetPack 4.6 I didn't find TensorRT in /usr/local/. What should I do now? How can I build the plugins?

Thank you very much!

Hi,

You can find the TensorRT include path in /usr/include/aarch64-linux-gnu/ and the library path in /usr/lib/aarch64-linux-gnu/.
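
You can also confirm the installed TensorRT version through the package manager (the libnvinfer packages are how JetPack ships TensorRT):

$ dpkg -l | grep nvinfer

The headers and libraries look like this: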

$ ll /usr/lib/aarch64-linux-gnu/libnvinfer*
lrwxrwxrwx 1 root root        26 Jun 25 20:17 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.8.0.1
lrwxrwxrwx 1 root root        26 Jun 25 20:17 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8 -> libnvinfer_plugin.so.8.0.1
-rw-r--r-- 1 root root  18280576 Jun 25 20:17 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.8.0.1
-rw-r--r-- 1 root root  21018654 Jun 25 20:17 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin_static.a
lrwxrwxrwx 1 root root        19 Jun 25 20:17 /usr/lib/aarch64-linux-gnu/libnvinfer.so -> libnvinfer.so.8.0.1
lrwxrwxrwx 1 root root        19 Jun 25 20:17 /usr/lib/aarch64-linux-gnu/libnvinfer.so.8 -> libnvinfer.so.8.0.1
-rw-r--r-- 1 root root 188463744 Jun 25 20:17 /usr/lib/aarch64-linux-gnu/libnvinfer.so.8.0.1
-rw-r--r-- 1 root root 333591900 Jun 25 20:17 /usr/lib/aarch64-linux-gnu/libnvinfer_static.a
$ ll /usr/include/aarch64-linux-gnu/NvInfer*
-rw-r--r-- 1 root root 268726 Jun 25 20:17 /usr/include/aarch64-linux-gnu/NvInfer.h
-rw-r--r-- 1 root root  42497 Jun 25 20:17 /usr/include/aarch64-linux-gnu/NvInferImpl.h
-rw-r--r-- 1 root root   6176 Jun 25 20:17 /usr/include/aarch64-linux-gnu/NvInferLegacyDims.h
-rw-r--r-- 1 root root  11327 Jun 25 20:17 /usr/include/aarch64-linux-gnu/NvInferPlugin.h
-rw-r--r-- 1 root root  11819 Jun 25 20:17 /usr/include/aarch64-linux-gnu/NvInferPluginUtils.h
-rw-r--r-- 1 root root  60163 Jun 25 20:17 /usr/include/aarch64-linux-gnu/NvInferRuntimeCommon.h
-rw-r--r-- 1 root root  94791 Jun 25 20:17 /usr/include/aarch64-linux-gnu/NvInferRuntime.h
-rw-r--r-- 1 root root   3148 Jun 25 20:17 /usr/include/aarch64-linux-gnu/NvInferVersion.h
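
With these paths, you can point the two variables in the plugins Makefile at the JetPack locations instead of the x86_64 ones, for example:

TENSORRT_INCS=-I"/usr/include/aarch64-linux-gnu"
TENSORRT_LIBS=-L"/usr/lib/aarch64-linux-gnu"

Then rebuild from the plugins directory of your tensorrt_demos checkout:

$ cd tensorrt_demos/plugins
$ make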

Thanks.

Thank you very much!

I built it successfully following your instructions, but the compiler shows a warning: "virtual function override intended?"

Can it still work correctly?

Thank you!

Hi,

This is a harmless warning; the plugin can still run correctly. It is most likely triggered by the virtual function signature changes in the TensorRT 8 plugin API (for example, the added noexcept qualifiers), which code written against the TensorRT 7 headers does not match exactly.

Thanks.

Yes, it runs, but the speed is a little slower.

I set nvpmodel to -m 2 because the Quickstart Guide — DeepStream 6.1.1 Release documentation suggests:
For Jetson Xavier NX, run sudo nvpmodel -m 2 instead of 0.
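
For reference, these are the commands (nvpmodel -q just queries the current mode, and jetson_clocks locks the clocks at the maximum for the current power mode):

$ sudo nvpmodel -q
$ sudo nvpmodel -m 2
$ sudo jetson_clocks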

But I compared, and the INT8 model is a little slower than it was before.

Thank you.

Hi,

Would you mind opening a new topic for the performance issue?
Thanks.
