Details on the platforms you are using:
Ubuntu 16.04 LTS
GPU type: 1050Ti
NVIDIA driver version: 390.87
CUDA version: 9.0
cuDNN version: 7.13
Python version: 3.5
TensorRT version: 5.0
In the official sample code, /sample/python/network_api_pytorch_mnist, line 33:
conv1 = network.add_convolution(input=input_tensor, num_output_maps=20, kernel_shape=(5, 5), kernel=conv1_w, bias=conv1_b)
But when I built my own network using my own model parameters, an error occurred:
File "debug.py", line 31, in <module>
conv1=network.add_convolution(input=input_tensor, num_output_maps=32, kernel_shape=(3,3), kernel=conv1_w)
TypeError: add_convolution(): incompatible function arguments. The following argument types are supported:
- (self: tensorrt.tensorrt.INetworkDefinition, input: tensorrt.tensorrt.ITensor, num_output_maps: int, kernel_shape: tensorrt.tensorrt.DimsHW, kernel: tensorrt.tensorrt.Weights, bias: tensorrt.tensorrt.Weights) → tensorrt.tensorrt.IConvolutionLayer
Invoked with: <tensorrt.tensorrt.INetworkDefinition object at 0x7f1c17173bc8>; kwargs: input=<tensorrt.tensorrt.ITensor object at 0x7f1c17173c38>, kernel_shape=(3, 3), kernel=tensor(omitting the data here for brevity), num_output_maps=32
I also tried passing a numpy array as the kernel, and I tried wrapping conv1_w in tensorrt.Weights. Neither solution worked. Roughly what I tried is shown below.
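Here is a minimal sketch of what I tried (simplified from debug.py; the input shape and the random stand-in weights are placeholders for my real model, whose weights come from a PyTorch state_dict):

import tensorrt as trt
import torch

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
input_tensor = network.add_input(name="data", dtype=trt.float32, shape=(1, 28, 28))

# Stand-in for the real weights taken from my model's state_dict
conv1_w = torch.randn(32, 1, 3, 3)

# Attempt 1: pass the PyTorch tensor directly (the call from the traceback above) -> TypeError
conv1 = network.add_convolution(input=input_tensor, num_output_maps=32, kernel_shape=(3, 3), kernel=conv1_w)

# Attempt 2: convert the tensor to a numpy array first
conv1 = network.add_convolution(input=input_tensor, num_output_maps=32, kernel_shape=(3, 3), kernel=conv1_w.numpy())

# Attempt 3: wrap the numpy array in trt.Weights explicitly
conv1 = network.add_convolution(input=input_tensor, num_output_maps=32, kernel_shape=(3, 3), kernel=trt.Weights(conv1_w.numpy()))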
I have two questions:
1. What am I missing in the code, and what can I do to solve this problem? I have also tried wrapping the kernel_shape tuple in tensorrt.DimsHW, and that did not work either.
2. Does the model training process have to be part of the network-creation code? (See the sketch below for what I mean.)
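To make question 2 concrete, this is the workflow I am hoping is possible, where training happens in a separate script and the network-building script only loads the saved weights (the file name mnist_cnn.pth and the state_dict keys are just placeholders from my own setup):

import torch

# In a separate training script, after training finishes:
# torch.save(model.state_dict(), "mnist_cnn.pth")

# In the network-building script, load the saved weights instead of retraining:
state_dict = torch.load("mnist_cnn.pth", map_location="cpu")
conv1_w = state_dict["conv1.weight"].numpy()
conv1_b = state_dict["conv1.bias"].numpy()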
network_api_pytorch.zip (29 KB)