Is cuDNN necessary when TensorRT is used to run inference?

Description

It’s a little weird to ask this question, but I am confused.

I used the ldd command to inspect libnvinfer.so.8 and found no link to cuDNN, while libnvinfer_plugin.so.8 does have one. It seems OK to run inference linking only against libnvinfer.so.

I also checked libnvinfer.so.7 from TensorRT-7.0.0.11, and it does need cuDNN.

So

  1. What is the difference between libnvinfer.so.8 and libnvinfer_plugin.so.8?
  2. Is it OK to run inference linking only libnvinfer.so.8, with no cuDNN provided?

ldd libnvinfer.so.8

    linux-vdso.so.1 (0x00007ffc04544000)
    librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fb413687000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fb413681000)
    libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007fb41349f000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007fb413350000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fb413335000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fb413141000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fb42d9ee000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fb41311e000)
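
For comparison, the cuDNN link in libnvinfer_plugin.so.8 is easy to confirm the same way (output trimmed to the relevant line; the resolved path differs per install):

    ldd libnvinfer_plugin.so.8 | grep cudnn
        libcudnn.so.8 => ...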

Environment

TensorRT Version: TensorRT-8.4.0.6

Hi,
Can you try running your model with the trtexec command and share the "--verbose" log in case the issue persists?
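
For example (the model file name here is a placeholder):

    trtexec --onnx=model.onnx --verbose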

You can refer to the link below for the list of supported operators; if any operator is not supported, you need to create a custom plugin to support that operation.

Also, we request that you share your model and script, if not already shared, so that we can help you better.

Meanwhile, for some common errors and queries, please refer to the link below:

Thanks!

I am sorry, but there is no error here. It’s the question of how to dynamically link TensorRT into a project that confuses me.

Hi,

Sorry for the delayed response.
libnvinfer.so.8 is the actual TensorRT runtime. libnvinfer_plugin.so.8 is a collection of open-source TensorRT plugins, available at GitHub - NVIDIA/TensorRT: TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.

Those plugins are only required to implement some special ONNX operations and are not necessary for every TRT engine.
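
To make the split concrete, here is a minimal deserialization sketch in C++ (the engine file name and the lack of error handling are placeholders, not from this thread). An engine built only from standard layers needs nothing beyond libnvinfer; an engine containing plugin layers additionally needs libnvinfer_plugin linked and its plugins registered via initLibNvInferPlugins() before deserialization:

    #include <fstream>
    #include <iostream>
    #include <iterator>
    #include <vector>
    #include "NvInfer.h"
    #include "NvInferPlugin.h"  // only needed when the engine uses plugin layers

    namespace {
    class Logger : public nvinfer1::ILogger {
        void log(Severity severity, const char* msg) noexcept override {
            if (severity <= Severity::kWARNING) std::cout << msg << "\n";
        }
    } gLogger;
    }

    int main() {
        // Hypothetical serialized engine file.
        std::ifstream file("model.engine", std::ios::binary);
        std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                               std::istreambuf_iterator<char>());

        // Required only for engines that contain open-source plugin layers;
        // this is the call that pulls in libnvinfer_plugin.so.8.
        initLibNvInferPlugins(&gLogger, "");

        nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
        nvinfer1::ICudaEngine* engine =
            runtime->deserializeCudaEngine(blob.data(), blob.size());
        // ... create an execution context and run inference ...
        return engine != nullptr ? 0 : 1;
    }

Such a program links against -lnvinfer and -lnvinfer_plugin; dropping the two plugin lines removes the libnvinfer_plugin dependency (and, through it, the cuDNN link noted above) entirely.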

It is possible to disable cuDNN tactics (e.g. using this trtexec flag):

  --tacticSources=tactics     Specify the tactics to be used by adding (+) or removing (-) tactics from the default 
                              tactic sources (default = all available tactics).
                              Note: Currently only cuDNN, cuBLAS and cuBLAS-LT are listed as optional tactics.
                              Tactic Sources: tactics ::= [","tactic]
                                              tactic  ::= (+|-)lib
                                              lib     ::= "CUBLAS"|"CUBLAS_LT"|"CUDNN"
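
For example, an engine could be built with cuDNN tactics removed like this (the model file names are placeholders):

    trtexec --onnx=model.onnx --saveEngine=model.engine --tacticSources=-CUDNN

The same thing is possible through the C++ builder API; a sketch against an existing IBuilderConfig:

    #include <cstdint>
    #include "NvInfer.h"

    // Remove cuDNN from a builder config's tactic sources,
    // mirroring trtexec's --tacticSources=-CUDNN.
    void disableCudnnTactics(nvinfer1::IBuilderConfig& config) {
        nvinfer1::TacticSources sources = config.getTacticSources();
        sources &= ~(1U << static_cast<uint32_t>(nvinfer1::TacticSource::kCUDNN));
        config.setTacticSources(sources);
    }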

Thank you.