TorchScript crashes in a C++ program linking LibTorch

I’m trying to use TorchScript in C++.

A Python script that loads the TorchScript model works fine; it runs on Ubuntu under Docker Desktop.

However, module.forward(…) crashes in a C++ program (running on Windows) that loads the same TorchScript model via LibTorch.

The LibTorch distribution was extracted from libtorch-win-shared-with-deps-debug-1.10.1+cu113.zip.

The command nvidia-smi shows “CUDA Version: 11.5” and print(torch.__version__) shows 1.11.0a0+b6df043.
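For reference, the LibTorch headers also expose the version the C++ side was built against, so the mismatch can be confirmed at runtime. A minimal check, assuming your LibTorch distribution ships torch/version.h (recent releases do):

#include <torch/version.h>
#include <iostream>

int main()
{
    // TORCH_VERSION is a string literal assembled from
    // TORCH_VERSION_MAJOR/MINOR/PATCH, e.g. "1.10.1".
    std::cout << "LibTorch version: " << TORCH_VERSION << std::endl;
    return 0;
}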

Could anyone please give me a solution or a hint?


Python

import torch

device = torch.device('cuda')

fn_t = "q_cnn_t.pt"
load_model = torch.jit.load(fn_t).to(device)

input = torch.rand((1, 1, 224, 224), dtype=torch.float32).to(device)
output = load_model(input)

C++

#include <torch/script.h>
#include <iostream>

torch::DeviceType aDeviceType = torch::kCUDA;
torch::Device aTorchDevice = torch::Device(aDeviceType);
const char* s_pfn = "D:\\mnt\\docker\\sample_mnist\\q_cnn_t.pt";
torch::jit::script::Module module;
try
{
    module = torch::jit::load(s_pfn);
    module.to(aTorchDevice);
}
catch (const c10::Error& e)
{
    std::cerr << e.msg();

    return -1;
}

// Note: this tensor is created on the CPU, while the module was moved to CUDA.
at::Tensor input = torch::ones({ 1, 1, 224, 224 });

at::Tensor output = module.forward({ input }).toTensor(); // An exception occurs at this line
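One thing stands out above: the input tensor is created on the CPU while the module was moved to CUDA, and feeding a CPU tensor to a CUDA-resident module throws. A minimal sketch of the forward call with the input on the same device (assuming the device mismatch is what raises the exception; the file name is the one from the Python snippet above):

#include <torch/script.h>
#include <iostream>

int main()
{
    torch::Device device(torch::kCUDA);

    try
    {
        torch::jit::script::Module module = torch::jit::load("q_cnn_t.pt");
        module.to(device);
        module.eval();

        // Create the input directly on the device the module lives on.
        at::Tensor input = torch::ones({ 1, 1, 224, 224 },
                                       torch::TensorOptions().device(device));

        torch::NoGradGuard no_grad; // inference only, no autograd bookkeeping
        at::Tensor output = module.forward({ input }).toTensor();
        std::cout << output.sizes() << std::endl;
    }
    catch (const c10::Error& e)
    {
        std::cerr << e.what() << std::endl;
        return -1;
    }
    return 0;
}

Even with the devices matching, note that the model was exported by PyTorch 1.11.0a0 inside the container but is being loaded by LibTorch 1.10.1; a TorchScript file serialized by a newer PyTorch is not guaranteed to load in an older LibTorch, so re-exporting from a matching 1.10.1 environment may also be necessary.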

Hi,
Please refer to the link below for a sample guide.

Refer to the installation steps from the link in case you are missing anything.

However, the suggested approach is to use TRT NGC containers to avoid any system-dependency issues.

To run the Python sample, make sure the TRT Python packages are installed when using the NGC container:
/opt/tensorrt/python/python_setup.sh

If you are trying to run a custom model, please share your model and script with us so that we can assist you better.
Thanks!

Thank you.
I see that TensorRT is an inference engine.

And TensorRT seems to be different from LibTorch; is that correct?

Hi,

This issue doesn’t look TensorRT-related. We recommend you post your concern on the relevant platform to get better help.

Thank you.

Oh, thank you.

BTW, are you an AI?