Does nvinfer work with a custom neural network?

Our neural network outputs a heatmap from an input image. Can nvinfer work with this custom neural network, or is nvinfer only for object detection, classification, and segmentation?

Sure. nvinfer is not limited to detection/classification/segmentation; a custom neural network is fine. First deploy the network with TensorRT, working out the pre-processing, any TensorRT IPlugins needed, and the post-processing, then deploy it with DeepStream. DeepStream provides ample configurable pre-processing and supports custom post-processing.
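On the DeepStream side, most of this is driven by the Gst-nvinfer config file. A minimal hypothetical sketch (property names are from the plugin manual; the file names and values here are placeholders, not from this thread):

```ini
[property]
gpu-id=0
# pre-processing: y = net-scale-factor * (x - mean)
net-scale-factor=0.0039215697
# model input; nvinfer builds/loads the TensorRT engine from this
uff-file=model.uff
model-engine-file=model_b1_fp16.engine
batch-size=1
# custom output parsing (see the parse-bbox-func-name discussion below)
parse-bbox-func-name=NvDsInferParseCustomResnet
custom-lib-path=/path/to/libnvdsparsebbox.so
```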

Can you be more specific about how to handle the pre-processing, the TensorRT IPlugins, and the post-processing?

https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html
→ GStreamer Plugin Details → Gst-nvinfer
→ IPlugin Interface

Is there any IPlugin sample in the DeepStream folder? If not, can you provide one?

deepstream-4.0/sources/objectDetector_FasterRCNN
deepstream-4.0/sources/objectDetector_SSD

I gave DeepStream a custom UFF model (not a detector/classifier/segmentation network) and it successfully converted it to TensorRT. Do I need to write an IPlugin? Or will writing just a custom parser suffice to manipulate the inference output?

A custom parser will suffice. You don't need to write an IPlugin if you can already get a TensorRT engine. You just need to write an output parser to replace

parse-bbox-func-name=NvDsInferParseCustomResnet
custom-lib-path=/path/to/libnvdsparsebbox.so

and update the other network properties in the config file.

So how do I calculate the net-scale-factor, and how can I get the output pixel value? Is it the feature map's width * height?

Hi BlgPeng_XX,

Please open a new topic for your issue. Thanks.