Does nvinfer have a process for extracting weights?

I understand that when using nvinfer you need to build your own .so library that implements the network layers via the API, but after building the network layers you also need to extract the weights.
Does nvinfer provide a process for extracting the weights?

  • deepstream-app version 6.1.0
  • DeepStreamSDK 6.1.0
  • CUDA Driver Version: 11.4
  • CUDA Runtime Version: 11.0
  • TensorRT Version: 8.2
  • cuDNN Version: 8.4
  • libNVWarp360 Version: 2.0.1d3

Can you elaborate on what "so library" you are referring to? And from which layer are the "weights" that you want to extract?
Please note that nvinfer is the plugin that performs inference; its output is either metadata that includes the model's output or raw tensor data. The model's weights are not visible at the output of the model.

I checked the documentation and learned that 'nvdsinfer_custom_impl.h' seems to be what I'm looking for, but we don't quite understand what it does or where the source code implementing it is.

Maybe you can refer to sources\objectDetector_FasterRCNN.

Well, I have studied the sample you posted.
I would like to know whether there is a call-relationship diagram for nvinfer, similar to a UML diagram, so that I can understand the implementation principle of nvinfer more clearly!

Since there has been no update from you for a while, we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.

There is no call-relationship diagram for nvinfer at the moment, but you can refer to our guide: https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.