Save an engine with a PyTorch model in Python, load the engine in C++?

#include <fstream>
#include <memory>

#include "NvInfer.h"

using namespace nvinfer1;

// Open the serialized engine file in binary mode
std::ifstream cache("pyt_mnist.engine", std::ios::in | std::ios::binary);

// Determine the file size
std::streampos begin, end;
begin = cache.tellg();
cache.seekg(0, std::ios::end);
end = cache.tellg();

std::size_t size = end - begin;
cache.seekg(0, std::ios::beg);

// Allocate a buffer large enough to hold the whole file
std::unique_ptr<unsigned char[]> engine_data(new unsigned char[size]);

cache.read((char*)engine_data.get(), size);
cache.close();

// Create the runtime and deserialize the engine
// (gLogger is an nvinfer1::ILogger implementation defined elsewhere)
IRuntime* infer = createInferRuntime(gLogger);
ICudaEngine* engine = infer->deserializeCudaEngine((const void*)engine_data.get(), size, nullptr);

Is this right?

Dear wangyang9113,
Could you please elaborate a bit more on what you are trying to achieve?
Unfortunately, I could not understand your question.

OK, I serialize an engine with the Python API and save it, then I load the engine file and deserialize it with the C++ API, but the result seems wrong. So my question is: can the C++ API load an engine saved with the Python API?
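For reference, the save side in Python looks roughly like the sketch below. This is a minimal, hypothetical example assuming an older TensorRT release (the build_cuda_engine era that matches the three-argument deserializeCudaEngine call above); populating the network from the PyTorch model (e.g. via ONNX) is elided.

import tensorrt as trt

# Minimal sketch: build and serialize an engine with the TensorRT Python API
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
# ... populate `network` from the PyTorch model (e.g. via an ONNX parser) ...

engine = builder.build_cuda_engine(network)

# engine.serialize() returns an IHostMemory buffer; write it to disk so the
# C++ side can read it back and pass it to deserializeCudaEngine()
with open("pyt_mnist.engine", "wb") as f:
    f.write(engine.serialize())

In principle the serialized bytes are the same regardless of which API produced them, but the engine must be serialized and deserialized on the same platform and TensorRT version, as noted in the reply below.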

Dear wangyang9113,
Just for information: if you are loading the engine on DrivePX2 using the C++ API, the engine should also have been serialized on DrivePX2, as TensorRT engines are not portable across platforms. If this is indeed the case, can you please file a bug via DevZone with all supporting information?

Please log in to https://developer.nvidia.com/drive with your credentials, then go to MyAccount->MyBugs->Submit a new bug to file the bug.

Please share the ID here to follow up.