PyTorch to TensorRT model migration

Description

While migrating a PyTorch model to TensorRT, we are getting an error related to the model's input tensor values. Please find the sample code we are using for the migration and suggest the values we should provide to the torch.randn() function.
codes.py (5.8 KB)

Error

RuntimeError: Given input size: (128x1x1). Calculated output size: (128x0x0). Output size is too small

Steps To Reproduce

  • Run codes.py

Hi @riyan.dcosta ,

It seems there is an issue with your input dimensions (batch, 3, 3, 3), i.e. they are too small to pass through the complete network.

I have tested your code with larger input dimensions and it works fine (attached). Here is an open-source link for reference: python - Given input size: (128x1x1). Calculated output size: (128x0x0). Output size is too small - Stack Overflow

Note: kindly use .cuda() as I do during the conversion to ONNX and then to TensorRT.
response.ipynb (36.3 KB)

The suggestion is to use (batch_size, 3, H, W) as the input dimensions, where H and W should be large enough to pass through your network.
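
For illustration, here is a minimal sketch of such a dummy input, assuming a model named model_pose already on the GPU and a placeholder size of 224x224 (use the resolution your network was actually trained on):

import torch

batch_size = 4                               # hypothetical value; choose based on GPU memory
H, W = 224, 224                              # placeholders; use your model's training resolution
x = torch.randn(batch_size, 3, H, W).cuda()  # dummy RGB input of shape (batch_size, 3, H, W)
torch_out = model_pose(x)                    # forward pass through the PyTorch model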

Thank you, it's working now. The next step was to convert to ONNX using:
torch_out = model_pose(x)
torch.onnx.export(torch_out, x, "onnx_pose.onnx")
Here I'm getting an AttributeError:
File "D:\py_progs\Testing\convertPyTorch-ONNX\venv\lib\site-packages\torch\onnx\utils.py", line 38, in select_model_mode_for_export
is_originally_training = model.training
AttributeError: 'tuple' object has no attribute 'training'
Can you please direct me on how to fix this error?

Hi @karunakar.r ,

There is a mistake in line 2, i.e. torch.onnx.export(torch_out, x, "onnx_pose.onnx").

Replace "torch_out" with "model_pose". For more information and extra flags for an optimized ONNX model, refer to this link:

https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html
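
Put together, the corrected export call would look roughly like this (x and model_pose as above; the opset version and tensor names are illustrative choices, not requirements):

import torch

# Pass the model itself (not its output) as the first argument
torch.onnx.export(model_pose, x, "onnx_pose.onnx",
                  export_params=True,      # embed the trained weights in the ONNX file
                  opset_version=11,        # illustrative; pick an opset your TensorRT version supports
                  input_names=["input"],
                  output_names=["output"])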


Hi @bgiddwani
Could you please explain how you arrived at the matrix dimensions 4, 3, 28, 28 mentioned in the code you shared (response.py) in the above comments.

The cell in question is:

# TODO Adding the right size of input values here
x = torch.randn(batch_size, 3, 28, 28).cuda()
torch_out = model_pose(x)

@karunakar.r

Dim0 = batch size (choose based on your GPU memory).
Dim1 = number of input channels, which is always 3 in the case of an RGB image.
Dim2 and Dim3 = H x W. Here you should use the dimensions on which your model was trained.

For example, if the model was trained on ImageNet data, the preferred size would be 224x224.
If it was the COCO dataset, the dimensions might change to 640x640. Similarly, for MNIST, 28x28.

Here I didn't know the exact dimensions, so I used a safe random size, but I would suggest checking the original shape for accurate results.
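
As a rough sketch (the names and batch size are placeholders), the dummy input for each case would look like:

import torch

batch_size = 4                                             # limited by GPU memory
x_imagenet = torch.randn(batch_size, 3, 224, 224).cuda()   # ImageNet-style classifiers
x_coco     = torch.randn(batch_size, 3, 640, 640).cuda()   # COCO-style detectors
x_mnist    = torch.randn(batch_size, 3, 28, 28).cuda()     # the safe fallback used in response.ipynb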


Some examples for YOLO:

Different models are trained with different dimensions: pytorch-YOLOv4/cfg at master · Tianxiaomo/pytorch-YOLOv4 · GitHub

Check out the cfg files and the preprocessing step.


Hi @bgiddwani,
Thanks for the response. We are using a TensorRT model that was migrated from

GitHub - ZheC/Realtime_Multi-Person_Pose_Estimation: Code repo for realtime multi-person pose estimation in CVPR'17 (Oral), using 1, 3, 368, 368 as the input dimensions.

The preprocessing we have is:

img_test = cv2.resize(img_raw, (0, 0), fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
img_test_pad, pad = pad_right_down_corner(img_test, param_stride, param_stride)
img_test_pad = np.transpose(np.float32(img_test_pad[:, :, :, np.newaxis]), (3, 2, 0, 1)) / 256 - 0.5
feed = Variable(torch.from_numpy(img_test_pad)).cuda()
output1, output2 = model(feed)
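
Given the 1, 3, 368, 368 migration shape mentioned above, a sketch of the ONNX export for this pose model might look like the following (model is assumed to be the loaded pose-estimation network; the file and tensor names are placeholders):

import torch

x = torch.randn(1, 3, 368, 368).cuda()       # dummy input matching the migration dimensions
torch.onnx.export(model.cuda(), x, "pose_estimation.onnx",
                  input_names=["input"],
                  output_names=["output1", "output2"])   # the two outputs unpacked above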

Hi @karunakar.r,

Are you still facing this issue?

Thank you.

Hi @spolisetty, we tried migrating the model using the GitHub repo below, but that also didn't work.

All we want is for the model to work with multiple image dimensions.
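
One common approach for supporting multiple image dimensions (a sketch only, not validated against this specific model) is to export the ONNX with dynamic spatial axes and then build the TensorRT engine with an optimization profile covering the expected size range:

import torch

x = torch.randn(1, 3, 368, 368).cuda()       # representative shape used for tracing
torch.onnx.export(model.cuda(), x, "pose_dynamic.onnx",
                  input_names=["input"],
                  output_names=["output1", "output2"],
                  dynamic_axes={"input":   {2: "height", 3: "width"},
                                "output1": {2: "out_h", 3: "out_w"},
                                "output2": {2: "out_h", 3: "out_w"}})

# The TensorRT engine can then be built with an optimization profile, e.g. via trtexec
# (the min/opt/max shapes below are hypothetical; pick the range you actually need):
#   trtexec --onnx=pose_dynamic.onnx \
#           --minShapes=input:1x3x184x184 \
#           --optShapes=input:1x3x368x368 \
#           --maxShapes=input:1x3x736x736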