Unable to generate libnvds_infercustomparser_frcnn_uff.so from the nvdsinfer_customparser_frcnn_uff library

I am running a TLT ResNet50 model with DeepStream. I have converted the ResNet50 .h5 file to an .etlt file as described in Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation.
The pgie_frcnn_uff_config.txt file I am using is:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=labels.txt
# Provide the .etlt model exported by TLT or a TensorRT engine created by tlt-converter
# If using the .etlt model, please also specify the key ('nvidia_tlt')
# model-engine-file=./rcnn.engine
tlt-encoded-model=frcnn_kitti.etlt
tlt-model-key=cmswbDk2OHFwcWgwZzAzdWw2ZzVkZjFlbWs6N2ZkMjFhMGItZmVhMS00NzRmLTk2YTQtOTU5NmUwNDAzMDlk
uff-input-dims=3;384;1280;0
uff-input-blob-name=input_1
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=5
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=dense_regress/BiasAdd;dense_class/Softmax;proposal
parse-bbox-func-name=NvDsInferParseCustomFrcnnUff
custom-lib-path=libnvds_infercustomparser_frcnn_uff.so

[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
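
For context, I intend to run the sample app from deepstream_4.x_apps with this config, roughly as follows (the usage pattern is my reading of the repo README; the input filename is just a placeholder):

# Placeholder input file; any H264 stream should work.
./deepstream-custom pgie_frcnn_uff_config.txt sample_720p.h264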

Now I need libnvds_infercustomparser_frcnn_uff.so, which I am trying to build from deepstream_4.x_apps/nvdsinfer_customparser_frcnn_uff at master · NVIDIA-AI-IOT/deepstream_4.x_apps · GitHub, but when I run make it fails with:

fatal error: nvdsinfer_custom_impl.h: No such file or directory
 #include "nvdsinfer_custom_impl.h"
          ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.

This file is not in the GitHub repo, and even if I copy it from NVIDIA DeepStream SDK API Reference: nvdsinfer_custom_impl.h Source File, the build still fails.
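
For reference, these are roughly the steps I run (the header location under the DeepStream install is my assumption; since the #include uses quotes, copying the header next to the sources should let the compiler find it):

# Assumption: DeepStream 4.0 is installed under /opt/nvidia/deepstream/deepstream-4.0
# and ships nvdsinfer_custom_impl.h in sources/includes.
cd deepstream_4.x_apps/nvdsinfer_customparser_frcnn_uff
cp /opt/nvidia/deepstream/deepstream-4.0/sources/includes/nvdsinfer_custom_impl.h .
make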

Hi muhammad,
To run a FasterRCNN model in DeepStream, you need a label file and a DeepStream configuration file. In addition, you need to compile the FasterRCNN DeepStream plugin and the sample app.
See the "Integrating a FasterRCNN model" part of the TLT doc Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation.

You do not need to change libnvds_infercustomparser_frcnn_uff.so.

You just need to compile a new libnvinfer_plugin.so.5.x.x and replace the original one (i.e., /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.x.x).
Refer to https://devtalk.nvidia.com/default/topic/1066456/transfer-learning-toolkit/deepstream-inference-on-tx1-using-faster-rcnn-resnet-18-trained-using-tlt/post/5402162/#5402162
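
As a rough sketch of the replace step (Jetson library path as above; the build output path and the exact version suffix are placeholders for your setup):

# Back up the stock TensorRT plugin library before overwriting it.
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.x.x /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5.x.x.bak
# Copy in the newly built library, then refresh the linker cache.
sudo cp <your-trt-oss-build-dir>/libnvinfer_plugin.so.5.x.x /usr/lib/aarch64-linux-gnu/
sudo ldconfig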

For faster_rcnn, you can also refer to https://devtalk.nvidia.com/default/topic/1063940/transfer-learning-toolkit/transfert-learning-toolkit-gt-export-model-/post/5388876/#5388876