Incompatible argument types in the TensorRT Python API when creating a network from scratch

Details on the platforms you are using:
Ubuntu 16.04 LTS
GPU type: 1050Ti
nvidia driver version: 390.87
CUDA version: 9.0
CUDNN version: 7.13
Python version: 3.5
TensorRT version: 5.0

In the official sample code, /sample/python/network_api_pytorch_mnist, line 33:

conv1 = network.add_convolution(input=input_tensor, num_output_maps=20, kernel_shape=(5, 5), kernel=conv1_w, bias=conv1_b)

But when I built my own network using my own model parameters, an error occurred:

File “debug.py”, line 31, in
conv1=network.add_convolution(input=input_tensor, num_output_maps=32, kernel_shape=(3,3), kernel=conv1_w)
TypeError: add_convolution(): incompatible function arguments. The following argument types are supported:

  1. (self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, num_output_maps: int, kernel_shape: tensorrt.tensorrt.DimsHW, kernel: tensorrt.tensorrt.Weights, bias: tensorrt.tensorrt.Weights) -> tensorrt.tensorrt.IConvolutionLayer

Invoked with: <tensorrt.tensorrt.INetworkDefinition object at 0x7f1c17173bc8>; kwargs: input=<tensorrt.tensorrt.ITensor object at 0x7f1c17173c38>, kernel_shape=(3, 3), kernel=tensor(omitting the data here for brevity), num_output_maps=32

I also tried a NumPy array for the kernel, and tried wrapping conv1_w with tensorrt.tensorrt.Weights. Neither solution worked.
I have two questions:
1. What am I missing in the code, and what can I do to solve this problem? I have tried wrapping the tuple with tensorrt.DimsHW and it didn't work.
2. Must the model training process be part of the network-creation code?

network_api_pytorch.zip (29 KB)


Can you share a repro package containing how you are building your own network using kernel_shape, including the source that exhibits the error you are seeing?

I have uploaded my code. More details are in README.md.
Looking forward to your reply!
Thank you.

Could you give some help or advice? I am really confused. After trying a lot of things, I still cannot solve this problem.
network_api_pytorch.zip (29 KB)


I’ve locally reproduced the error you are seeing. We are triaging now. It’ll take a few days for updates.

NVIDIA Enterprise Support


There are two issues going on:

  1. The bias argument to .add_convolution() was not supplied. Use bias=trt.Weights() for a default (empty) bias.
  2. You're passing a PyTorch tensor for the kernel weights. You need to convert it to a NumPy array, e.g. kernel=test_conv1_w.numpy():

test_conv1 = network.add_convolution(input=input_tensor, num_output_maps=32, kernel_shape=(3, 3), kernel=test_conv1_w.numpy(), bias=trt.Weights())

I made these changes to debug_2.py, and it executed successfully.
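As a side note, the weight-preparation step can be checked without TensorRT installed. This is a minimal sketch using pure NumPy; the array shape and the conv1_w name are illustrative, and in the real script the array would come from the trained PyTorch model (e.g. model.conv1.weight.detach().cpu().numpy()). The point is that the kernel passed to add_convolution should be a float32, C-contiguous NumPy array:

```python
import numpy as np

# Illustrative conv1 weights: 32 output maps, 1 input channel, 3x3 kernel
# (shape matches num_output_maps=32, kernel_shape=(3, 3) from the call above).
conv1_w = np.random.randn(32, 1, 3, 3)

# Ensure the layout TensorRT's Python bindings expect: float32, C-contiguous.
conv1_w = np.ascontiguousarray(conv1_w, dtype=np.float32)

print(conv1_w.dtype, conv1_w.flags['C_CONTIGUOUS'], conv1_w.size)
```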

NVIDIA Enterprise Support

I ran into the same problem, and adding bias=trt.Weights() fixed it.
But the official documentation says bias is OPTIONAL, so I omitted it at first.

I hope you can update the documentation to keep it consistent with the code.