Issues with loading a model from memory

Please provide the following info:
Hardware Platform: Pegasus developer kit
Software Version: Drive Software 10
Host Machine Version: Ubuntu 18.04
SDK Manager Version:

There appears to be an API, dwDNN_initializeTensorRTFromMemoryNew, that is supposed to load a model from memory. Unfortunately, it's failing with:

driveworks_camera_inference_node: engine.cpp:1104: bool nvinfer1::rt::Engine::deserialize(const void*, std::size_t, nvinfer1::IGpuAllocator&, nvinfer1::IPluginFactory*): Assertion `size >= bsize && "Mismatch between allocated memory size and expected size of serialized engine."' failed.

Has anyone come across this issue before? The sample code seems to use dwDNN_initializeTensorRTFromFileNew, but there seems to be no sample code for the API that I am using. I am able to load from a file using the FileNew API, but if I open the file, dump its contents into memory, and then try to use the MemoryNew API, the network fails to load. Am I doing this correctly? Does the API expect model_content to be the model file loaded into memory, or something else?

Hi @tudor626a1,
Please share your code snippet for our checking. Thanks!

False alarm. I found a bug in my code and was able to get the API to work properly. Thanks for the speedy response.

Glad to hear you clarified your issue!