Hello.
The model was trained in PyTorch, so the .pth weight file was exported to an .onnx model file, and the .onnx model is loaded in SampleOnnxMnist via the C++ API. However, among the torchvision models, only ResNet works: DenseNet and Inception v3 cannot be parsed. A pretrained MobileNetV2 .uff converted directly from the officially downloaded files cannot be parsed either.
So I tested accuracy with the pretrained ResNet50 model. With the same image (cat-vs-dog test_dataset, 1-5.jpg) and the same preprocessing (divide by 255), the outputs of PyTorch eval and TensorRT inference are quite different. How can I fix this?
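One likely cause, though I cannot confirm it from the post alone: torchvision's pretrained models expect ImageNet mean/std normalization on top of the divide-by-255 scaling, and cv2 and PIL also resize with different interpolation. A way to isolate this is to feed the PyTorch model the exact same array the C++ loop builds. The sketch below mirrors that loop in NumPy (the function name is mine):

```python
import numpy as np

# torchvision's pretrained models are trained with this normalization;
# a plain /255 pipeline omits it, which alone can shift the logits a lot.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess_like_cpp(bgr, normalize=False):
    """Mirror of the C++ readimagefromFiles loop.

    bgr: HxWx3 uint8 array in OpenCV's BGR layout.
    Returns a 3xHxW float32 array (RGB, CHW, scaled to [0, 1]).
    """
    rgb = bgr[:, :, ::-1].astype(np.float32) / 255.0  # BGR -> RGB, [0, 1]
    if normalize:  # match torchvision's eval-time preprocessing
        rgb = (rgb - IMAGENET_MEAN) / IMAGENET_STD
    return np.ascontiguousarray(rgb.transpose(2, 0, 1))  # HWC -> CHW
```

Running the PyTorch model on `torch.from_numpy(preprocess_like_cpp(img)[None])` with `normalize=False` should reproduce the TensorRT input exactly; if the outputs then agree, the mismatch came from normalization/resize, and the same mean/std subtraction should be added to the C++ loop.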
I think building the layers step by step with the TensorRT API might avoid the model-conversion problems. So how do I transform .pth weight files into .wts files?
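The .wts name suggests the plain-text weight format used by community TensorRT C++-API examples (e.g. the tensorrtx repo); assuming that convention, each file starts with the tensor count, followed by one line per tensor giving its name, element count, and each float32 value as 8 hex digits (big-endian). A sketch of the conversion:

```python
import struct

def write_wts(weights, path):
    """Serialize named weight tensors to the tensorrtx-style .wts text format.

    weights: dict mapping name -> flat iterable of floats. For a PyTorch
    checkpoint, build it as
        {k: v.reshape(-1).cpu().numpy() for k, v in torch.load("model.pth").items()}
    """
    with open(path, "w") as f:
        f.write(f"{len(weights)}\n")  # header: number of tensors
        for name, values in weights.items():
            flat = [float(v) for v in values]
            f.write(f"{name} {len(flat)}")
            for v in flat:
                # Each float32 as big-endian hex, e.g. 1.0 -> 3f800000
                f.write(" " + struct.pack(">f", v).hex())
            f.write("\n")
```

On the C++ side, the network then has to be rebuilt layer by layer with the TensorRT builder API, looking up each layer's weights by the state_dict key written here.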
Thank you.
Here is the image-loading code using the OpenCV API.
#include <cstdio>
#include <cstdlib>
#include <opencv2/opencv.hpp>

// Reads an image, resizes it to INPUT_W x INPUT_H, and writes it into
// `data` in CHW order as RGB floats scaled to [0, 1].
void readimagefromFiles(const char *path, float data[])
{
    cv::Mat img = cv::imread(path);
    if (img.empty())
    {
        printf("Load image from file failed!\n");
        system("pause");
        return;  // don't touch the image data if loading failed
    }
    // Note: cv::Size takes (width, height), not (height, width).
    cv::resize(img, img, cv::Size(INPUT_W, INPUT_H));
    for (int h = 0; h < INPUT_H; h++)
    {
        for (int w = 0; w < INPUT_W; w++)
        {
            // OpenCV stores pixels as BGR; write the channels out as RGB.
            const cv::Vec3b &pixel = img.at<cv::Vec3b>(h, w);
            data[0 * INPUT_W * INPUT_H + h * INPUT_W + w] = pixel[2] / 255.0f;
            data[1 * INPUT_W * INPUT_H + h * INPUT_W + w] = pixel[1] / 255.0f;
            data[2 * INPUT_W * INPUT_H + h * INPUT_W + w] = pixel[0] / 255.0f;
        }
    }
}
TensorRT platform:
- RTX 2080 Ti
- Windows 10 (SDK 10.0.17763.0)
- CUDA 10.1
- cuDNN 7.6.1
- Visual Studio 2017 (v141)

PyTorch platform:
- Ubuntu 18.04
- CUDA 10.0
- cuDNN 7.5.1
- PyTorch 1.1.0
- torchvision 0.3.0 (models: ResNet50, inception_v3, densenet161)
- TensorFlow 1.13.1 with the official pretrained MobileNetV2 (.pb)