Inferring a detectnet_v2 .trt model in Python

Reference:

https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/

Your card's compute capability is 7.5. Please set GPU_ARCHS to 75.

Thanks @Morganh… I followed the full steps you mentioned for installing TRT OSS into the base docker, but the same error remains. After installing TRT OSS, it should provide the required plugins, right? Yet the “could not find plugin BatchTilePlugin_TRT version 1” error is still there…

[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin BatchTilePlugin_TRT version 1
[TensorRT] ERROR: safeDeserializationUtils.cpp (293) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
  File "main.py", line 36, in <module>
    context = trt_engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
root@605c1edbf876:/workspace/tlt-experiments/box_jeston#
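For reference, this failure mode means deserialize_cuda_engine returned None because the plugin creators were not registered before deserialization. A minimal loading sketch that guards against it (my own illustration, assuming the TensorRT 7 Python API; the engine path and plugin-library path are placeholders to adjust):

import ctypes
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Explicitly load the (rebuilt) plugin library so its creators are available.
# The path is an assumption; point it at the actual libnvinfer_plugin location.
ctypes.CDLL("/usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7", mode=ctypes.RTLD_GLOBAL)

# Register all plugin creators (BatchTilePlugin_TRT among them) with the
# plugin registry before deserializing the engine.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    trt_engine = runtime.deserialize_cuda_engine(f.read())

# Guard against a failed deserialization instead of hitting the
# 'NoneType' AttributeError shown above.
if trt_engine is None:
    raise RuntimeError("Engine deserialization failed")
context = trt_engine.create_execution_context()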

Have you replaced the plugin “libnvinfer_plugin.so*”?
See more in https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/tree/master/TRT-OSS/Jetson

Yes, I did these steps:

cp out/libnvinfer_plugin.so.7.0.0.1 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 && \

cp out/libnvinfer_plugin_static.a /usr/lib/x86_64-linux-gnu/libnvinfer_plugin_static.a && \

cd ../../../ && \

rm -rf trt_oss_src

Can you share the result of the command below?
$ ll -sh /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*

Please take a look at my comment shared in the topic below.

This is the output for ll -sh /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*:

   0 lrwxrwxrwx 1 root root   26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.0.0
   0 lrwxrwxrwx 1 root root   26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.0.0
4.1M -rw-r--r-- 1 root root 4.1M Jan 13 16:47 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0

Please check my comment in the link above.
Please see below.
Please follow the exact steps shared in the GitHub repo.

Original:

$ ll -sh /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*

0 lrwxrwxrwx 1 root root 26 Apr 26 16:41 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.0
0 lrwxrwxrwx 1 root root 26 Apr 26 16:41 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.0
4.5M -rw-r--r-- 1 root root 4.5M Apr 26 16:38 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0

If you run:

$ sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0 ~/libnvinfer_plugin.so.7.1.0.bak

$ sudo cp {TRT_SOURCE}/build/out/libnvinfer_plugin.so.7.0.0.1 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0

$ sudo ldconfig

then you will get:

nvidia@nvidia:~$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 -> libnvinfer_plugin.so.7.1.0*
-rwxr-xr-x 1 root root 4652648 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0*

That’s what is expected now.
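To confirm from Python that the replaced library actually registers the missing creator, one can list the plugin registry. This is a small diagnostic sketch of my own, assuming the tensorrt package imports cleanly inside the container:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# Print every registered plugin creator; "BatchTilePlugin_TRT" should appear
# if the replaced libnvinfer_plugin.so was built from the OSS sources.
registry = trt.get_plugin_registry()
for creator in registry.plugin_creator_list:
    print(creator.name, creator.plugin_version)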

Thanks… I followed the steps given in the GitHub repo, and the installation was successful. But sudo ldconfig is not returning any logs.

When running the command make nvinfer_plugin -j$(nproc) I could see two warnings. Apart from that, the build was successful, and inside the build/out folder the files libnvinfer_plugin.so, libnvinfer_plugin.so.7.0.0, and libnvinfer_plugin.so.7.0.0.1 were generated.

These are the final few lines of the log:

/workspace/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp: In member function ‘virtual int nvinfer1::plugin::MultilevelProposeROI::enqueue(int, const void* const*, void**, void*, cudaStream_t)’:
/workspace/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:408:46: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
             mParam, proposal_ws, workspace + kernel_workspace_offset,
                                              ^
/workspace/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:425:29: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
                 workspace + kernel_workspace_offset,
                             ^
[ 60%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/detectionForward.cu.o
[ 60%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/extractFgScores.cu.o
[ 60%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gatherTopDetections.cu.o
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/generateAnchors.cu.o
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gridAnchorLayer.cu.o
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/maskRCNNKernels.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/nmsLayer.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/normalizeLayer.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/permuteData.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/priorBoxLayer.cu.o
[ 80%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalKernel.cu.o
[ 80%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalsForward.cu.o
[ 80%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/regionForward.cu.o
[ 86%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/reorgForward.cu.o
[ 86%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/roiPooling.cu.o
[ 86%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/rproiInferenceFused.cu.o
/workspace/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced

[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/cudaDriverWrapper.cu.o
[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerImage.cu.o
[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerClass.cu.o
[ 93%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/InferPlugin.cpp.o
[100%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/__/samples/common/logger.cpp.o
[100%] Linking CUDA device code CMakeFiles/nvinfer_plugin.dir/cmake_device_link.o
[100%] Linking CXX shared library ../out/libnvinfer_plugin.so
[100%] Built target nvinfer_plugin

After this, I successfully copied the libnvinfer_plugin.so.7.0.0.1 file from the build/out folder to /usr/lib/x86_64-linux-gnu/ under the name libnvinfer_plugin.so.7.0.0.

Still, sudo ldconfig is not returning any output, and while inferring, the “getPluginCreator could not find plugin BatchTilePlugin_TRT version 1” error is there.

Yes, there is no log. That’s expected.
Please check again now with
$ ls -sh /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*

   0 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so
   0 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7
4.7M /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0

This is the output now…
While inferring, the “getPluginCreator could not find plugin BatchTilePlugin_TRT version 1” error is still there.

As mentioned above, please make sure you get a similar library and soft links under /usr/lib/x86_64-linux-gnu/:

nvidia@nvidia:~$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 -> libnvinfer_plugin.so.7.1.0*
-rwxr-xr-x 1 root root 4652648 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0*

Yes,

root@f9f07db124db:/# ll /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root      26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.0.0*
lrwxrwxrwx 1 root root      26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.0.0*
-rwxr-xr-x 1 root root 4918136 Jan 18 06:42 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0*

I am afraid that is not expected.
Can you double-check the steps mentioned above?

Please see below.

Original:

$ ll -sh /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*

0 lrwxrwxrwx 1 root root 26 Apr 26 16:41 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.0
0 lrwxrwxrwx 1 root root 26 Apr 26 16:41 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.0
4.5M -rw-r--r-- 1 root root 4.5M Apr 26 16:38 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0

If you run:

$ sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0 ~/libnvinfer_plugin.so.7.1.0.bak

$ sudo cp {TRT_SOURCE}/build/out/libnvinfer_plugin.so.7.0.0.1 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0

$ sudo ldconfig

then you will get:

nvidia@nvidia:~$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 -> libnvinfer_plugin.so.7.1.0*
-rwxr-xr-x 1 root root 4652648 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0*

Initially:

root@4c0b40b94840:/workspace/TensorRT/build# ll -sh /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*
      0 lrwxrwxrwx 1 root root  26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.0.0
      0 lrwxrwxrwx 1 root root  26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.0.0
    15M -rw-r--r-- 1 root root 15M Aug  3 16:18 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0

then:
sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 ~/libnvinfer_plugin.so.7.0.0.bak
sudo cp `pwd`/out/libnvinfer_plugin.so.7.0.0.1 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0
sudo ldconfig

After that:

root@4c0b40b94840:/workspace/TensorRT/build# ll -sh /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*
   0 lrwxrwxrwx 1 root root   26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.0.0*
   0 lrwxrwxrwx 1 root root   26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.0.0*
4.7M -rwxr-xr-x 1 root root 4.7M Jan 18 07:36 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0*

So this is the issue:
I am not getting anything like the third line in the log you shared; a soft link such as /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 -> libnvinfer_plugin.so.7.1.0* is not appearing for me…

I will try to run your steps on my side. Please correct me if anything is wrong.

  1. Trigger the 2.0_py3 TLT docker
  2. Build TRT OSS
  3. Replace the plugin

Before building TRT OSS, CMake was installed… wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz

One extra question: is it a must for you to run inference in the TLT container?
Actually, after you have trained an etlt model, you can copy the etlt model to a Jetson device (such as Nano, NX, Xavier, TX2, etc.), then run the JetPack version of tlt-converter to generate the TRT engine.
Then build TRT OSS on the Jetson device and replace the plugin. Then run inference on the Jetson device.

I was trying to run it on my system. Outside docker I tried to install TensorRT, but there are some import issues yet to be solved. Inside the container, TensorRT is already installed, and I could successfully infer a detectnet_v2 TRT engine with my Python code… But for inferring a TRT engine made with YOLO, these plugins need to be installed, and that is where I am facing these issues…
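For completeness, once the engine deserializes cleanly, the inference itself follows the standard TensorRT buffer pattern. A minimal sketch of mine, assuming an implicit-batch engine such as those produced by tlt-converter, pycuda installed, and that binding 0 is the input:

import pycuda.autoinit  # creates a CUDA context
import pycuda.driver as cuda
import tensorrt as trt

# 'trt_engine' is assumed to be the engine deserialized as shown earlier.
context = trt_engine.create_execution_context()
stream = cuda.Stream()

# Allocate one pinned host buffer and one device buffer per binding,
# sized from the engine itself.
bindings, host_bufs, dev_bufs = [], [], []
for i in range(trt_engine.num_bindings):
    size = trt.volume(trt_engine.get_binding_shape(i)) * trt_engine.max_batch_size
    dtype = trt.nptype(trt_engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Fill host_bufs[0] with the preprocessed input here, then:
cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
context.execute_async(batch_size=1, bindings=bindings, stream_handle=stream.handle)
for i in range(1, trt_engine.num_bindings):
    cuda.memcpy_dtoh_async(host_bufs[i], dev_bufs[i], stream)
stream.synchronize()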