While migrating a PyTorch model to TensorRT we are getting an error with the model input tensor values. Please find the sample code we are using for the migration and suggest the values we have to provide to the torch.randn() function. codes.py (5.8 KB)
Error
RuntimeError: Given input size: (128x1x1). Calculated output size: (128x0x0). Output size is too small
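For context, this error means the dummy input's spatial size is too small for the network's downsampling layers. A minimal sketch that reproduces the same failure (an assumed toy network, not the model from codes.py):

import torch
import torch.nn as nn

# Five stride-2 convolutions shrink a 28x28 input down to a 1x1 feature map,
# and the fixed 7x7 average pool then computes a 0x0 output, raising the
# same "Output size is too small" error.
backbone = nn.Sequential(
    nn.Conv2d(3, 128, kernel_size=3, stride=2, padding=1),
    nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1),
    nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1),
    nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1),
    nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1),
    nn.AvgPool2d(kernel_size=7),
)
backbone(torch.randn(1, 3, 224, 224))  # OK: 224 -> 112 -> 56 -> 28 -> 14 -> 7 -> 1
backbone(torch.randn(1, 3, 28, 28))    # RuntimeError: Given input size: (128x1x1). Calculated output size: (128x0x0)

Using a dummy input that matches the resolution the model was trained on avoids this.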
Thank you, it's working now. The next step was to convert to ONNX using
torch_out = model_pose(x)
torch.onnx.export(torch_out, x, "onnx_pose.onnx")
Here I'm getting an AttributeError:
File "D:\py_progs\Testing\convertPyTorch-ONNX\venv\lib\site-packages\torch\onnx\utils.py", line 38, in select_model_mode_for_export
is_originally_training = model.training
AttributeError: 'tuple' object has no attribute 'training'
Can you please direct me on fixing this error?
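The traceback points at the arguments to torch.onnx.export: it expects the model itself as its first argument and the example input as the second, but the snippet above passes torch_out (the model's output, a tuple) in the model slot. A likely fix, assuming model_pose is the nn.Module from the earlier snippet:

import torch

model_pose.eval()  # export should run with the model in eval mode
x = torch.randn(1, 3, 28, 28).cuda()  # same dummy input shape as before (assumed)
torch_out = model_pose(x)
torch.onnx.export(model_pose, x, "onnx_pose.onnx")  # model first, then input, then output file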
Hi @bgiddwani
Could you please explain how you arrived at the input dimensions of 4, 3, 28, 28 in the code you shared (response.py) in the comments above?
# TODO Adding the right size of input values here
x = torch.randn(batch_size, 3, 28, 28).cuda()
torch_out = model_pose(x)
Dim0 = batch size (choose based on your GPU memory).
Dim1 = number of input channels, which is always 3 in the case of an RGB image.
Dim2 and Dim3 = H x W. Here you should use the dimensions on which your model was trained.
For example, if the model was trained on ImageNet data, the preferred size would be 224x224.
If it was the COCO dataset, the dimensions might change to 640x640; similarly, 28x28 for MNIST.
Here I didn't know the training dimensions, so I used a safe placeholder size, but I would suggest checking the original input shape for accurate results.
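To make the rule of thumb above concrete, here is a small sketch (the sizes and variable names are illustrative assumptions, not taken from the poster's model):

import torch

batch_size = 4  # Dim0: pick freely, limited mainly by GPU memory
x_imagenet = torch.randn(batch_size, 3, 224, 224).cuda()  # ImageNet-trained models
x_coco = torch.randn(batch_size, 3, 640, 640).cuda()      # COCO-trained detectors
x_mnist = torch.randn(batch_size, 1, 28, 28).cuda()       # MNIST is grayscale, so 1 channel

Each dummy tensor follows the NCHW layout (batch, channels, height, width) that PyTorch convolutions expect.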