Problem trying to build TensorRT plugins

Hi

I am trying to deploy a TLT-trained Faster R-CNN model into DeepStream on an x86 machine using Option 1, which involves the exported .etlt file.

I understand that I will need to build the cropAndResizePlugin and proposalPlugin.
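
For context, this is roughly what I intend to add to the nvinfer config for Option 1. The key names are from the deepstream_tlt_apps sample configs, but the file name, paths, key value, and parser function name below are assumptions from my setup:

# sketch only: append the Option 1 keys to a hypothetical pgie config file;
# the parse-bbox-func-name is my guess from the sample sources
cat >> pgie_frcnn_tlt_config.txt <<'EOF'
tlt-encoded-model=./models/frcnn/faster_rcnn.etlt
tlt-model-key=<my_tlt_key>
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=./post_processor/libnvds_infercustomparser_tlt.so
EOF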

Q1: I am unsure how to proceed with that. I noticed my CMakeLists.txt has the following options turned on, but I can't find what was actually built: the plugin source directories (e.g., proposalPlugin) contain only CMakeLists.txt, proposalPlugin.cpp, and proposalPlugin.h, even after I ran cmake and successfully generated libnvinfer_plugin.so.

option(BUILD_PLUGINS "Build TensorRT plugin" ON)
option(BUILD_PARSERS "Build TensorRT parsers" ON)
option(BUILD_SAMPLES "Build TensorRT samples" ON)
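
For reference, this is the build sequence I followed, adapted from the TensorRT OSS section of the TLT docs. The GPU_ARCHS value is specific to my card and the lib dir is from my x86 install, so both may need adjusting:

git clone -b master https://github.com/nvidia/TensorRT TensorRT
cd TensorRT && git submodule update --init --recursive
mkdir -p build && cd build
# GPU_ARCHS=75 is for my Turing GPU; substitute your own compute capability
cmake .. -DGPU_ARCHS=75 -DTRT_LIB_DIR=/usr/lib/x86_64-linux-gnu/ -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)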

Q2: To generate libnvds_infercustomparser_tlt.so, I am required to run make in deepstream_tlt_apps, but I am faced with errors:

make -C post_processor
make[1]: Entering directory '/home/lab/deepstream/deepstream_tlt_apps/post_processor'
/bin/sh: 1: deepstream-app: not found
g++ -o libnvds_infercustomparser_tlt.so nvdsinfer_custombboxparser_tlt.cpp -I/opt/nvidia/deepstream/deepstream-/sources/includes -I/usr/local/cuda-11.1/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -lnvparsers -L/usr/local/cuda-11.1/lib64 -lcudart -lcublas -Wl,--end-group
nvdsinfer_custombboxparser_tlt.cpp:25:10: fatal error: nvdsinfer_custom_impl.h: No such file or directory
25 | #include "nvdsinfer_custom_impl.h"
| ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[1]: *** [Makefile:49: libnvds_infercustomparser_tlt.so] Error 1
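
Looking at the log again, the "/bin/sh: 1: deepstream-app: not found" line suggests the Makefile shells out to deepstream-app to detect the DeepStream version, which would explain the empty version segment in the -I/opt/nvidia/deepstream/deepstream-/sources/includes path. A sketch of the workaround I tried (the bin path below is from a standard x86 install and is my assumption):

# make deepstream-app visible so the Makefile can detect the DS version
export PATH=$PATH:/opt/nvidia/deepstream/deepstream/bin
# CUDA_VER already resolved in my case (see the cuda-11.1 paths above)
export CUDA_VER=11.1
make -C post_processor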

Q3: When I ran in the DeepStream container and performed the same step, I got:
/usr/bin/ld: cannot find -lcudart
/usr/bin/ld: cannot find -lcublas
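
A sketch of how I tried to locate the CUDA libraries inside the container (the exact locations vary with the container's CUDA version, so the export below is an assumption):

# find where the container keeps the CUDA runtime and cuBLAS libraries
find / -name "libcudart.so*" 2>/dev/null
find / -name "libcublas.so*" 2>/dev/null
# gcc consults LIBRARY_PATH at link time; point it at whatever find returns
export LIBRARY_PATH=/usr/local/cuda/lib64:/usr/lib/x86_64-linux-gnu:$LIBRARY_PATH
make -C post_processor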

Thanks

Not sure if you have checked https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps (sample apps to demonstrate how to deploy models trained with TAO on DeepStream). There is a guide there, e.g. "1. Build TRT OSS Plugin", for running the TLT faster-rcnn model.

Thanks!

Yes, I have checked them out, and I will recheck again. Can I check: does libnvinfer_plugin.so contain the rest of the plugins as well? What end result should I expect to see for the proposal and cropAndResize plugins? Are they generated as separate plugin libraries? I am using the default for BUILD_PLUGINS, which is ON.

My problem is that my build succeeds in generating libnvinfer_plugin.so, but in the build/plugin directories for proposalPlugin and cropAndResizePlugin I can only see 1) CMakeFiles 2) cmake_install.cmake 3) Makefile.
So is this the expected result?
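
For reference, these are the checks I am running against the build tree (the paths are from my clone, so adjust as needed):

# checking whether the combined library is the only .so produced
ls TensorRT/build/out/libnvinfer_plugin.so*
# and what the per-plugin build directories actually contain
ls TensorRT/build/plugin/proposalPlugin/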

# ./trtexec --onnx=<your_model.onnx> --verbose
From the verbose output, you can find the plugin list.

Thanks. Is that command for JetPack? I am not using Jetson, as I am doing this on x86, and I can't find the trtexec command; from a short check, it seems I need to build it.
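
From my short check, trtexec seems to ship as a sample with the x86 TensorRT packages and just needs to be compiled. Is the following the right approach? (The paths are from a standard Debian-package install, so they are my assumption.)

# build trtexec from the bundled TensorRT samples
cd /usr/src/tensorrt/samples/trtexec
sudo make
# the resulting binary should land under /usr/src/tensorrt/bin
/usr/src/tensorrt/bin/trtexec --help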

Without using the tool, is there another way to check? The instructions on generating the plugins do not cover validating the results.

May I check what the expected output of the generated plugins is? Would you be able to give a printout or a link? That would be very helpful.

Maybe you can grep the name of the plugin in the lib binary to check if the string is there.
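
A minimal sketch of that check (the library path is from a default OSS build, and the exact registered name strings are an assumption, hence the case-insensitive grep):

# look for the plugin name strings inside the built library
strings `pwd`/out/libnvinfer_plugin.so | grep -i -e proposal -e cropandresize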

I don't understand what you mean by "output". You can find the description of the plugins at https://github.com/NVIDIA/TensorRT/tree/master/plugin .

Can you please provide the setup info, as other topics do?

My meaning of "output" is the plugin library for the proposal plugin and the cropAndResize plugin.
For instance, in the instructions at

https://docs.nvidia.com/metropolis/TLT/tlt-user-guide/text/object_detection/fasterrcnn.html#tensorrt-open-source-software-oss

" After building ends successfully, libnvinfer_plugin.so* will be generated under \pwd`/out/.`"

Thus I am wondering if a similar file will be created for the proposalPlugin and cropAndResizePlugin.

The following statement refers us to the OSS on GitHub, but the GitHub repo does not show the steps.

" For FasterRCNN, we will need to build TensorRT Open source plugins and custom bounding box parser. The instructions are provided below in the TensorRT OSS section above and the required code can be found in this GitHub repo."

But the GitHub repo only shows what needs to be generated and does not go into detail:

" Build TRT OSS Plugin

Refer to below README to update libnvinfer_plugin.so* if you want to run SSD, DSSD, RetinaNet, PeopleSegNet.

TRT Plugins Requirements

  • FasterRCNN : cropAndResizePlugin, proposalPlugin
  • SSD/DSSD/RetinaNet : batchTilePlugin, nmsPlugin
  • YOLOV3/YOLOV4 : batchTilePlugin, resizeNearestPlugin, batchedNMSPlugin
  • PeopleSegNet : generateDetectionPlugin, MultilevelCropAndResize, MultilevelProposeROI"

I think, to provide meaningful info, I would appreciate knowing what specific information you need to diagnose the issue.

When you filed the ticket, you should have seen that we recommend providing the info below.
This info provides the background for discussing the issue, for example that you are working on x86 instead of Jetson.

• DeepStream Version 5.0
• Hardware Platform (Jetson / GPU) Jetson Nano
• JetPack Version (valid for Jetson only) 4.3
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs) bug
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

As mentioned in https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps (sample apps to demonstrate how to deploy models trained with TAO on DeepStream):
For Faster-RCNN, if you run a newer DS version, e.g. DS 5.1, it is not necessary to build TRT OSS and update libnvinfer_plugin.so*.
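
A quick way to confirm which DS version you are running (assuming deepstream-app is on your PATH):

deepstream-app --version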


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.