I’m trying to use TorchScript in C++.
A Python script that loads the TorchScript model runs fine on Ubuntu under Docker Desktop.
But in a C++ program that loads the same TorchScript model with libtorch, module.forward(…) crashes; that program runs on Windows.
The libtorch is extracted from libtorch-win-shared-with-deps-debug-1.10.1+cu113.zip.
The command nvidia-smi shows “CUDA Version: 11.5”, and print(torch.__version__) shows 1.11.0a0+b6df043.
Could anyone please give me a solution or a hint?
Python:

```python
fn_t = "q_cnn_t.pt"
load_model = torch.jit.load(fn_t).to('cuda')
# device is set to 'cuda' earlier in the script
input = torch.rand((1, 1, 224, 224), dtype=torch.float32).to(device)
output = load_model(input)
```
C++:

```cpp
torch::DeviceType aDeviceType = torch::kCUDA;
torch::Device aTorchDevice = torch::Device(aDeviceType);
const char* s_pfn = "D:\\mnt\\docker\\sample_mnist\\q_cnn_t.pt";
torch::jit::script::Module module;
try
{
    module = torch::jit::load(s_pfn);
    module.to(aTorchDevice);
}
catch (const c10::Error& e)
{
    std::cerr << e.msg();
    return -1;
}
at::Tensor input = torch::ones({ 1, 1, 224, 224 });
at::Tensor output = module.forward({ input }).toTensor(); // An exception occurs at this line
```