Does nvinfer build and save the custom model engine based on the configuration file?

nvinfer1::ICudaEngine *engine = builder->buildEngineWithConfig(*network, *config);

1. I saw this code while learning how to implement the custom model interface, but after searching through the code I found only a header file for it. Is the source code not currently open?
2. I don't fully understand this code. Does it save the custom model based on the configuration file?
3. Does this code mean that nvinfer encapsulates TensorRT's engine-building process? In TensorRT it seems to be written like this: "ICudaEngine* engine = builder->buildCudaEngine(*network);"

  • deepstream-app version 6.1.0
  • DeepStreamSDK 6.1.0
  • CUDA Driver Version: 11.4
  • CUDA Runtime Version: 11.0
  • TensorRT Version: 8.2
  • cuDNN Version: 8.4
  • libNVWarp360 Version: 2.0.1d3
  • Device: A6000

This is the TensorRT code for creating the engine; please refer to TensorRT: nvinfer1::IBuilder Class Reference.
For further questions, please submit a topic in the TensorRT forum: TensorRT - NVIDIA Developer Forums.
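For context, this is roughly what the standard TensorRT 8.x build flow looks like. A minimal sketch, assuming an ONNX input model; the file name, workspace size, and error handling here are illustrative placeholders, not nvinfer's actual internal code:

#include "NvInfer.h"
#include "NvOnnxParser.h"
#include <iostream>
#include <memory>

// Minimal logger required by the TensorRT API.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
};

int main() {
    Logger logger;
    auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(flags));
    auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());

    // Populate the network, e.g. by parsing an ONNX file ("model.onnx" is a placeholder).
    auto parser = std::unique_ptr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, logger));
    if (!parser->parseFromFile("model.onnx", static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
        return 1;

    // buildEngineWithConfig() supersedes the older buildCudaEngine(); the extra
    // IBuilderConfig carries build options such as workspace size and precision.
    config->setMaxWorkspaceSize(1U << 30); // 1 GiB
    auto engine = std::unique_ptr<nvinfer1::ICudaEngine>(
        builder->buildEngineWithConfig(*network, *config));
    return engine ? 0 : 1;
}

The difference you noticed in question 3 is only an API revision: buildCudaEngine(*network) is the old signature, while buildEngineWithConfig(*network, *config) takes a separate IBuilderConfig.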

Has the TensorRT inference process already been implemented in DeepStream's nvinfer plugin? Do we just need to supply an engine model to nvinfer?


No. It is not open source.

The interface builds an engine file from the model.
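That is, the engine TensorRT builds gets serialized and cached on disk so later runs can deserialize it instead of rebuilding. A minimal sketch of that save step, assuming an already-built ICudaEngine; the path is a placeholder (gst-nvinfer derives the cached name from the model, batch size, GPU, and precision, e.g. something like model_b1_gpu0_fp16.engine):

#include "NvInfer.h"
#include <fstream>
#include <memory>

// Serialize a built engine and write the blob to disk.
void saveEngine(nvinfer1::ICudaEngine& engine, const char* path) {
    std::unique_ptr<nvinfer1::IHostMemory> blob(engine.serialize());
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(blob->data()), blob->size());
}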

Yes.

Yes.

nvinfer accepts any of the following model types:

  • Caffe Model and Caffe Prototxt
  • ONNX
  • UFF file
  • TAO Encoded Model and Key
  • Model engine

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html
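For illustration, a minimal gst-nvinfer [property] group wired up for an ONNX model. Every value here (paths, batch size, class count) is a placeholder to adapt, and model-engine-file is optional: if the engine file is missing, nvinfer builds and caches one from onnx-file.

[property]
gpu-id=0
onnx-file=model.onnx
# Optional pre-built engine; if it is missing, nvinfer builds one from onnx-file.
model-engine-file=model.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=4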

There are samples of gst-nvinfer configurations in the SDK; see C/C++ Sample Apps Source Details — DeepStream 6.1.1 Release documentation.

Please read the document and samples carefully.
