On running make nvinfer_plugin -j$(nproc) I saw two warnings. Apart from that, the build was successful, and the files libnvinfer_plugin.so, libnvinfer_plugin.so.7.0.0, and libnvinfer_plugin.so.7.0.0.1 were generated inside the build/out folder.
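For reference, the configure-and-build sequence followed the steps in the TRT OSS README / TLT docs; the exact flag values below are assumptions (set GPU_ARCHS for your own GPU, and point TRT_LIB_DIR at wherever your TensorRT libraries live):

cd TensorRT/build
cmake .. -DGPU_ARCHS=75 -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DTRT_BIN_DIR=`pwd`/out    # GPU_ARCHS=75 assumes a Turing GPU
make nvinfer_plugin -j$(nproc)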
These are the final few lines of the log:
/workspace/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp: In member function ‘virtual int nvinfer1::plugin::MultilevelProposeROI::enqueue(int, const void* const*, void**, void*, cudaStream_t)’:
/workspace/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:408:46: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
mParam, proposal_ws, workspace + kernel_workspace_offset,
^
/workspace/TensorRT/plugin/multilevelProposeROI/multilevelProposeROIPlugin.cpp:425:29: warning: pointer of type ‘void *’ used in arithmetic [-Wpointer-arith]
workspace + kernel_workspace_offset,
^
[ 60%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/detectionForward.cu.o
[ 60%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/extractFgScores.cu.o
[ 60%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gatherTopDetections.cu.o
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/generateAnchors.cu.o
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/gridAnchorLayer.cu.o
[ 66%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/maskRCNNKernels.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/nmsLayer.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/normalizeLayer.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/permuteData.cu.o
[ 73%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/priorBoxLayer.cu.o
[ 80%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalKernel.cu.o
[ 80%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/proposalsForward.cu.o
[ 80%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/regionForward.cu.o
[ 86%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/reorgForward.cu.o
[ 86%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/roiPooling.cu.o
[ 86%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/rproiInferenceFused.cu.o
/workspace/TensorRT/plugin/common/kernels/proposalKernel.cu(34): warning: variable "ALIGNMENT" was declared but never referenced
[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/cudaDriverWrapper.cu.o
[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerImage.cu.o
[ 93%] Building CUDA object plugin/CMakeFiles/nvinfer_plugin.dir/common/kernels/sortScoresPerClass.cu.o
[ 93%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/InferPlugin.cpp.o
[100%] Building CXX object plugin/CMakeFiles/nvinfer_plugin.dir/__/samples/common/logger.cpp.o
[100%] Linking CUDA device code CMakeFiles/nvinfer_plugin.dir/cmake_device_link.o
[100%] Linking CXX shared library ../out/libnvinfer_plugin.so
[100%] Built target nvinfer_plugin
After this, I successfully copied the libnvinfer_plugin.so.7.0.0.1 file from the build/out folder to /usr/lib/x86_64-linux-gnu/ under the name libnvinfer_plugin.so.7.0.0.
Still, sudo ldconfig does not return any output, and while inferring I still get the error: getPluginCreator could not find plugin BatchTilePlugin_TRT version 1.
So this is the issue: I am not seeing anything like the third line in the log you shared. Nothing like /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0 → libnvinfer_plugin.so.7.0.0* is coming up for me.
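For completeness, these are the exact steps I ran, plus the check I am using (paths match my container; as far as I understand, a plain sudo ldconfig prints nothing on success, so I am listing the cache instead to verify):

sudo cp build/out/libnvinfer_plugin.so.7.0.0.1 /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0
sudo ldconfig
ldconfig -p | grep libnvinfer_plugin    # should list the cached entry, e.g. libnvinfer_plugin.so.7 => /usr/lib/x86_64-linux-gnu/...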
One extra question: is it a must for you to run inference in the TLT container?
Actually, after you have trained an etlt model, you can copy that etlt model to a Jetson device (such as Nano, NX, Xavier, TX2, etc.), then run the JetPack version of tlt-converter to generate a TRT engine.
Then build TensorRT OSS on the Jetson device, replace the plugin library, and run inference on the Jetson device.
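As an illustration only (the key, input dimensions, and output node name below are placeholders; please check the TLT documentation for your model's actual values), the converter invocation looks roughly like this:

./tlt-converter -k $NGC_KEY \
                -d 3,384,1248 \
                -o BatchedNMS \
                -t fp16 \
                -e yolo_fp16.engine \
                yolo.etlt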
I was trying to run it on my system. Outside Docker I tried to install TensorRT, but there are some import issues that are yet to be solved. Inside the container, though, TensorRT is already installed, and I could successfully infer a DetectNet_v2 TRT engine with my Python code. But to infer a TRT engine made with YOLO, these plugins need to be installed, and that is where I am facing these issues.
Yes, true. In the near future I will be running this on a Jetson, but for now, yes, I am stuck inferring the YOLO TRT engine inside the TLT Docker container with a standalone Python script.
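For reference, the relevant part of my standalone script: I preload the rebuilt plugin library and call init_libnvinfer_plugins before deserializing the engine, which as far as I understand is required for plugins like BatchTilePlugin_TRT to be found (the library path and engine filename are specific to my setup):

import ctypes
import tensorrt as trt

# Preload the rebuilt plugin library so its plugin creators register with TensorRT.
ctypes.CDLL("/usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.7.0.0")

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
# Register all plugin creators (built-in and preloaded) with the plugin registry.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# Deserialize the YOLO engine only after the plugins are registered.
with open("yolo.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())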