I am running a TLT ResNet50 model with DeepStream. I have converted the ResNet50 .h5 file to an .etlt file as described in Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation.
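For reference, the export step that produces an .etlt file typically looks like the sketch below. This is a hedged example, not taken from this post: the `faster_rcnn` sub-task, the file paths, and the `$KEY` variable are all assumptions, and depending on the TAO version an experiment spec file (`-e`) may also be required. The key passed with `-k` at export time must be the same value later given as `tlt-model-key` in the DeepStream config.

```shell
# Hypothetical TAO export invocation; sub-task, paths, and key are placeholders.
# $KEY must match the tlt-model-key used in the nvinfer config file.
tao faster_rcnn export \
    -m /workspace/frcnn_kitti.tlt \
    -k $KEY \
    -o /workspace/frcnn_kitti.etlt
```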
The pgie_frcnn_uff_config.txt file contains:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=labels.txt
# Provide the .etlt model exported by TLT or a TensorRT engine created by tlt-converter
# If use .etlt model, please also specify the key('nvidia_tlt')
# model-engine-file=./rcnn.engine
tlt-encoded-model=frcnn_kitti.etlt
tlt-model-key=cmswbDk2OHFwcWgwZzAzdWw2ZzVkZjFlbWs6N2ZkMjFhMGItZmVhMS00NzRmLTk2YTQtOTU5NmUwNDAzMDlk
uff-input-dims=3;384;1280;0
uff-input-blob-name=input_1
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=5
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=dense_regress/BiasAdd;dense_class/Softmax;proposal
parse-bbox-func-name=NvDsInferParseCustomFrcnnUff
custom-lib-path=libnvds_infercustomparser_frcnn_uff.so
[class-attrs-all]
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
Now I need the libnvds_infercustomparser_frcnn_uff.so for this, which I am trying to build from deepstream_4.x_apps/nvdsinfer_customparser_frcnn_uff at master · NVIDIA-AI-IOT/deepstream_4.x_apps · GitHub, but running make fails with:
fatal error: nvdsinfer_custom_impl.h: No such file or directory
 #include "nvdsinfer_custom_impl.h"
          ^~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
This file is not in the GitHub repo, and even if I take it from NVIDIA DeepStream SDK API Reference: nvdsinfer_custom_impl.h Source File, the build still fails with an error.