Inferring a detectnet_v2 .trt model in Python

On running the command make nvinfer_plugin -j$(nproc) I could see two warnings. Apart from that, the build was successful, and these files were generated inside the build/out folder: libnvinfer_plugin.so, libnvinfer_plugin.so.7.0.0, and libnvinfer_plugin.so.7.0.0.1.
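
For reference, the build was configured and run roughly like this (a sketch from memory, not the exact commands; the GPU_ARCHS value and directory paths are illustrative):

$ cd /workspace/TensorRT && mkdir -p build && cd build
$ cmake .. -DGPU_ARCHS=75 -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DTRT_BIN_DIR=`pwd`/out
$ make nvinfer_plugin -j$(nproc)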

These are the final few lines of the log:

/workspace/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp: In member function ‘virtual int nvinfer1::plugin::MultilevelProposeROI::enqueue(int, const void* const*, void**, void*, cudaStream_t)’:
/workspace/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:408:46: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
             mParam, proposal_ws, workspace + kernel_workspace_offset,
                                              ^
/workspace/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:425:29: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
                 workspace + kernel_workspace_offset,
                             ^
[ 60%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/detectionForward.cu.o
[ 60%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/extractFgScores.cu.o
[ 60%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gatherTopDetections.cu.o
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/generateAnchors.cu.o
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gridAnchorLayer.cu.o
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/maskRCNNKernels.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/nmsLayer.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/normalizeLayer.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/permuteData.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/priorBoxLayer.cu.o
[ 80%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalKernel.cu.o
[ 80%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalsForward.cu.o
[ 80%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/regionForward.cu.o
[ 86%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/reorgForward.cu.o
[ 86%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/roiPooling.cu.o
[ 86%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/rproiInferenceFused.cu.o
/workspace/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced

[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/cudaDriverWrapper.cu.o
[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerImage.cu.o
[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerClass.cu.o
[ 93%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/InferPlugin.cpp.o
[100%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/__/samples/common/logger.cpp.o
[100%] Linking CUDA device code CMakeFiles/nvinfer_plugin.dir/cmake_device_link.o
[100%] Linking CXX shared library ../out/libnvinfer_plugin.so
[100%] Built target nvinfer_plugin

After this, I successfully copied the libnvinfer_plugin.so.7.0.0.1 file from the build/out folder to the /usr/lib/x86_64-linux-gnu/ folder under the name libnvinfer_plugin.so.7.0.0.

Still, sudo ldconfig does not return any output, and while inferring I get the error: getPluginCreator could not find plugin BatchTilePlugin_TRT version 1.

Yes, ldconfig produces no output; that’s expected.
Please check again now with:
$ ls -sh /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*
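
ldconfig itself prints nothing on success. If you want an extra confirmation that the loader cache picked up the replaced library, something like this should also work:

$ ldconfig -p | grep libnvinfer_plugin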

   0 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so
   0 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7
4.7M /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0

This is the output of the ls command now.
While inferring, the getPluginCreator could not find plugin BatchTilePlugin_TRT version 1 error is still there.

As mentioned above, please make sure you get a similar library and soft links under /usr/lib/x86_64-linux-gnu/:

nvidia@nvidia:~$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so → libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 → libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 → libnvinfer_plugin.so.7.1.0*
-rwxr-xr-x 1 root root 4652648 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0*

Yes,

root@f9f07db124db:/# ll /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root      26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.0.0*
lrwxrwxrwx 1 root root      26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.0.0*
-rwxr-xr-x 1 root root 4918136 Jan 18 06:42 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0*

I am afraid that is not expected.
Can you double-check the steps below, which were mentioned above?

Please see below.

Original:

$ ll -sh /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*

0 lrwxrwxrwx 1 root root 26 Apr 26 16:41 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so → libnvinfer_plugin.so.7.1.0
0 lrwxrwxrwx 1 root root 26 Apr 26 16:41 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 → libnvinfer_plugin.so.7.1.0
4.5M -rw-r--r-- 1 root root 4.5M Apr 26 16:38 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0

If you run:

$ sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0 ~/libnvinfer_plugin.so.7.1.0.bak

$ sudo cp {TRT_SOURCE}/build/out/libnvinfer_plugin.so.7.0.0.1 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0

$ sudo ldconfig

then you will get:

nvidia@nvidia:~$ ll /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so → libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:53 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7 → libnvinfer_plugin.so.7.1.0*
lrwxrwxrwx 1 root root 26 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.0.0 → libnvinfer_plugin.so.7.1.0*
-rwxr-xr-x 1 root root 4652648 Apr 27 08:55 /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0*

Initially,

root@4c0b40b94840:/workspace/TensorRT/build# ll -sh /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*
      0 lrwxrwxrwx 1 root root  26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.0.0
      0 lrwxrwxrwx 1 root root  26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.0.0
    15M -rw-r--r-- 1 root root 15M Aug  3 16:18 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0

Then:
sudo mv /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 ~/libnvinfer_plugin.so.7.0.0.bak
sudo cp `pwd`/out/libnvinfer_plugin.so.7.0.0.1 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0
sudo ldconfig

After that:

root@4c0b40b94840:/workspace/TensorRT/build# ll -sh /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so*
   0 lrwxrwxrwx 1 root root   26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so -> libnvinfer_plugin.so.7.0.0*
   0 lrwxrwxrwx 1 root root   26 Dec 17  2019 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7 -> libnvinfer_plugin.so.7.0.0*
4.7M -rwxr-xr-x 1 root root 4.7M Jan 18 07:36 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0*

So this is the issue:
I am not getting anything like the third line of the listing you shared; an extra symlink entry such as /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 → libnvinfer_plugin.so.7.0.0* does not appear for me.

I will try to run your steps on my side. Please correct me if any step is wrong:

  1. Trigger the 2.0_py3 TLT docker (see the sketch below)
  2. Build TRT OSS
  3. Replace the plugin
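
For step 1, I will start the container with something like this (the image tag is my assumption of the 2.0_py3 build you pulled; adjust the tag and mounted paths to your setup):

$ docker run --runtime=nvidia -it -v /local/workspace:/workspace nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3 /bin/bash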

Before building TRT OSS, CMake was installed from source: wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
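
The CMake install itself followed the usual bootstrap flow, roughly (a sketch; inside the container, sudo may not be needed):

$ wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
$ tar xzf cmake-3.13.5.tar.gz && cd cmake-3.13.5
$ ./bootstrap && make -j$(nproc) && make install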

One extra question: is it a must for you to run inference in the TLT container?
Actually, after you have trained an .etlt model, you can copy it to a Jetson device (such as Nano, NX, Xavier, TX2, etc.), then run the JetPack version of tlt-converter to generate the TRT engine.
Then build TRT OSS on the Jetson device, replace the plugin, and run inference on the Jetson device.
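
For example, on the Jetson it would be something along these lines (a hypothetical invocation only; the key, input dimensions, output node name, and flags are placeholders that depend on your model and the tlt-converter version):

$ ./tlt-converter -k $NGC_KEY -d 3,384,1248 -o BatchedNMS -t fp16 -e yolo_fp16.engine yolo_model.etlt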

I was trying to run it on my own system. Outside Docker I tried to install TensorRT, but there are some import issues that are yet to be solved. Inside the container, TensorRT is already installed, and I could successfully infer the TRT engine built from detectnet_v2 with my Python code. But to infer the TRT engine built from YOLO, these plugins need to be installed, and that is where I am facing these issues.

Got it. So your request is how to run inference successfully in the TLT docker with the existing YOLO TRT engine.

Yes, true. In the near future I will be running this on a Jetson, but currently, yes, I am stuck inferring the YOLO TRT engine inside the TLT docker with a standalone Python script.

Thanks for the info.

I really appreciate the help you are giving.

Have you run tlt-infer successfully against the same yolo trt engine?

Yes I did…

Can you share your standalone inference script here?

trt_loader.py (9.0 KB) main.py (2.2 KB)

main.py is the one I run; it loads trt_loader, and it works fine for detectnet_v2.
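
The engine-loading part follows the usual TensorRT Python pattern, roughly like this (a trimmed sketch, not the exact script; the engine path is a placeholder, and for the YOLO engine the rebuilt plugin library also has to be loaded and registered before deserializing):

import ctypes
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# For engines that use OSS plugins (e.g. BatchTilePlugin_TRT / BatchedNMS for YOLO),
# load the plugin library and register its creators before deserialization,
# otherwise getPluginCreator fails.
ctypes.CDLL("libnvinfer_plugin.so", mode=ctypes.RTLD_GLOBAL)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

def load_engine(path):
    # Deserialize a serialized TensorRT engine file (.trt / .engine)
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

engine = load_engine("model.trt")  # placeholder path
context = engine.create_execution_context()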

Same methods as you suggested…