I have reimplemented the DeepStream Gst-nvinfer plugin and I am trying to build an engine from an .etlt file. The engine is built and saved to disk successfully, but deserializing it afterwards fails. However, I am able to deserialize engines built by the stock Gst-nvinfer.
The parameters I set in NvDsInferContextInitParams are the following:
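Roughly like this (paths, key and dimensions are placeholders; the field names are the ones I see in nvdsinfer_context.h shipped with recent DeepStream releases, so they may need adjusting for other versions):

```cpp
// Sketch of the TLT-related fields I populate; everything else in the struct
// is left zeroed. Paths, key and dimensions are placeholders, not real values.
#include <cstring>
#include "nvdsinfer_context.h"

static void fill_tlt_init_params(NvDsInferContextInitParams &p)
{
    std::memset(&p, 0, sizeof(p));

    p.uniqueID     = 1;
    p.maxBatchSize = 1;
    p.networkMode  = NvDsInferNetworkMode_FP16;   // or _FP32 / _INT8

    // Encoded TLT/TAO model and its decryption key.
    std::strncpy(p.tltEncodedModelFilePath, "/path/to/model.etlt",
                 sizeof(p.tltEncodedModelFilePath) - 1);
    std::strncpy(p.tltModelKey, "my_model_key",
                 sizeof(p.tltModelKey) - 1);

    // Only relevant for INT8 mode:
    // std::strncpy(p.int8CalibrationFilePath, "/path/to/cal.bin",
    //              sizeof(p.int8CalibrationFilePath) - 1);

    // Input geometry and format (infer-dims / input order in the config file).
    p.inferInputDims.c   = 3;
    p.inferInputDims.h   = 544;
    p.inferInputDims.w   = 960;
    p.netInputOrder      = NvDsInferTensorOrder_kNCHW;
    p.networkInputFormat = NvDsInferFormat_RGB;

    // Where the serialized engine is written and later read back from.
    std::strncpy(p.modelEngineFilePath,
                 "/path/to/model.etlt_b1_gpu0_fp16.engine",
                 sizeof(p.modelEngineFilePath) - 1);
}
```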
I am doing that, and the only point of failure I can find right now is the set of parameters passed in NvDsInferContextInitParams. However, since the config parsing and the NvDsInferContextInitParams construction are so convoluted, I am not sure whether I am missing something. The source code is almost unreadable.
Since the NvDsInferCudaEngineGetFromTltModel function is opaque, I need the list of parameters it actually uses. The NvDsInferContextInitParams struct has a plethora of fields that are irrelevant to building the model.
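For reference, this is roughly how the function is declared in nvdsinfer_custom_impl.h; the notes on which NvDsInferContextInitParams fields it reads are only my guesses, since the implementation is closed source:

```cpp
// Declaration copied (roughly) from nvdsinfer_custom_impl.h; the comments on
// which initParams fields it consumes are guesses, not documentation.
#include "NvInfer.h"
#include "nvdsinfer_context.h"

extern "C" bool NvDsInferCudaEngineGetFromTltModel(
    nvinfer1::IBuilder* const builder,              // created by the caller
    nvinfer1::IBuilderConfig* const builderConfig,  // precision flags, workspace
    const NvDsInferContextInitParams* const initParams,
    // presumably at least: tltEncodedModelFilePath, tltModelKey,
    // int8CalibrationFilePath, maxBatchSize, inferInputDims, netInputOrder
    nvinfer1::DataType dataType,                    // kFLOAT / kHALF / kINT8
    nvinfer1::ICudaEngine*& cudaEngine);            // out: the built engine

// Hypothetical call, the way I assume nvdsinfer's model builder drives it:
//   auto* builder = nvinfer1::createInferBuilder(logger);
//   auto* config  = builder->createBuilderConfig();
//   nvinfer1::ICudaEngine* engine = nullptr;
//   if (NvDsInferCudaEngineGetFromTltModel(builder, config, &initParams,
//           nvinfer1::DataType::kHALF, engine)) {
//       // serialize the engine to modelEngineFilePath
//   }
```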
Compare them how? This is not Python, so I cannot just call print(). Even if I printed every field, not all of them are used by that function, so comparing the full contents is meaningless.
Should I understand that no one at NVIDIA knows how that plugin works anymore? If so, please make NvDsInferCudaEngineGetFromTltModel open source and I will come back with the answer myself.
Opaque APIs need good documentation. Telling people to look at the source code to figure out how to use them is not acceptable.
Yes, you are completely right. It does show how to use the API, spread over thousands of lines of code that seem to have been written by an intern who has just discovered OOP.
There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks