Convert the SuperPoint project to TensorRT on the AGX Xavier

Description

I am trying to use the SuperPoint project on the AGX Xavier. I converted the PyTorch model (.pth) to a TensorRT model (.trt) using Python, but when I load the TensorRT model (.trt) in C++, it fails.

Model saving code (PyTorch --> TensorRT, Python 3.6)

    # Convert the network with torch2trt and save the wrapper's state_dict to disk
    model_trt = torch2trt(self.net, [inp])
    torch.save(model_trt.state_dict(), 'superpoint_trt.trt')
    print('trt model export succeeded!')

Model loading code (TensorRT, C++)

    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    #include "NvInfer.h"
    // Helpers from the TensorRT OSS samples (samples/common):
    // sample::gLogger, sample::gLogError, sample::TrtUniquePtr
    #include "common.h"
    #include "logger.h"

    using namespace nvinfer1;

    ICudaEngine* loadEngine(const std::string& engine, int DLACore, std::ostream& err)
    {
        // Read the serialized engine file into memory.
        std::ifstream engineFile(engine, std::ios::binary);
        if (!engineFile)
        {
            err << "Error opening engine file: " << engine << std::endl;
            return nullptr;
        }

        engineFile.seekg(0, engineFile.end);
        long int fsize = engineFile.tellg();
        engineFile.seekg(0, engineFile.beg);

        std::vector<char> engineData(fsize);
        engineFile.read(engineData.data(), fsize);
        if (!engineFile)
        {
            err << "Error loading engine file: " << engine << std::endl;
            return nullptr;
        }

        // Deserialize the engine with the TensorRT runtime.
        sample::TrtUniquePtr<IRuntime> runtime{createInferRuntime(sample::gLogger.getTRTLogger())};
        if (DLACore != -1)
        {
            runtime->setDLACore(DLACore);
        }

        return runtime->deserializeCudaEngine(engineData.data(), fsize, nullptr);
    }

    int main()
    {
        std::cout << "Hello, World!" << std::endl;
        loadEngine("/home/runner/catkin_ws/deps/superpoint/superpoint_trt.trt", -1, sample::gLogError);
        return 0;
    }

Error log:

[05/19/2021-18:17:02] [E] [TRT] coreReadArchive.cpp (31) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
[05/19/2021-18:17:02] [E] [TRT] INVALID_STATE: std::exception
[05/19/2021-18:17:02] [E] [TRT] INVALID_CONFIG: Deserialize the cuda engine failed.

Environment

  • Up Time: 2 days 8:24:23 (jetson-stats 3.1.0)
  • Jetpack: 4.5.1 [L4T 32.5.1]
  • Board:
    • Type: AGX Xavier [16GB]
    • SOC Family: tegra194, ID: 25
    • Module: P2888-0001, Board: P2822-0000
    • Code Name: galen
    • CUDA ARCH: 7.2
    • Serial Number: 1422220028095
  • Hostname: runner-agx
  • Interfaces: docker0: 172.17.0.1, eth0: 192.168.80.138
  • Libraries:
    • CUDA: 10.2.89
    • OpenCV: 4.2.0 (compiled with CUDA: YES)
    • TensorRT: 7.1.3.0
    • VPI: libnvvpi1 1.0.15
    • VisionWorks: 1.6.0.501
    • Vulkan: 1.2.70
    • cuDNN: 8.0.0.180

Steps To Reproduce

SuperPoint: GitHub - magicleap/SuperPointPretrainedNetwork: PyTorch pre-trained model for real-time interest point detection, description, and sparse tracking (https://arxiv.org/abs/1712.07629)
torch2trt: GitHub - NVIDIA-AI-IOT/torch2trt: An easy to use PyTorch to TensorRT converter

Hi,

Serialization Error in verifyHeader: 0 (Magic tag does not match)

The above error indicates that a different TensorRT version was used for serialization and deserialization.
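
To confirm which TensorRT the Python side used, a minimal check (the C++ side's version can be read from the NV_TENSORRT_MAJOR/MINOR/PATCH macros in NvInferVersion.h) is:

    import tensorrt
    # Version that torch2trt linked against when the engine was serialized
    print(tensorrt.__version__)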

TensorRT engines are not portable across versions or devices.
Please convert the PyTorch model into a TensorRT engine directly on the AGX Xavier.
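
If torch2trt keeps failing, one alternative sketch for converting directly on the device (assuming the SuperPointNet class and superpoint_v1.pth weights from the linked repository; adjust the names to your checkout) is to export to ONNX and build the engine from that:

    import torch
    # Assumed module/class name from the SuperPointPretrainedNetwork repo
    from demo_superpoint import SuperPointNet

    net = SuperPointNet()
    net.load_state_dict(torch.load('superpoint_v1.pth', map_location='cpu'))
    net.eval()

    # SuperPoint takes a single-channel grayscale image; 240x320 is only an example size
    dummy = torch.randn(1, 1, 240, 320)
    torch.onnx.export(net, dummy, 'superpoint.onnx',
                      input_names=['image'], output_names=['semi', 'desc'],
                      opset_version=11)
    # The ONNX file can then be built into an engine on the Xavier,
    # e.g. trtexec --onnx=superpoint.onnx --saveEngine=superpoint.trt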

Thanks.

@AastaLLL Thanks

Both serialization and deserialization are done on the same machine with the same TensorRT version; the only difference is that the serialization is done in Python and the deserialization in C++.

By the way, how does TensorRT load a PyTorch model (.pth file) using the C++ API?

You need to save the serialized TensorRT engine, not the PyTorch model (.pth):

    # model_trt.engine is the underlying ICudaEngine built by torch2trt
    with open('engine.trt', 'wb') as stream:
        stream.write(model_trt.engine.serialize())
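
Before going back to C++, you can sanity-check the saved file with the TensorRT Python runtime on the same device (a minimal sketch):

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    with open('engine.trt', 'rb') as f, trt.Runtime(TRT_LOGGER) as runtime:
        # Should deserialize without the "Magic tag does not match" error
        engine = runtime.deserialize_cuda_engine(f.read())
    print('Deserialized OK:', engine is not None)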

@mosshammer.a Thank you. I’ll try.