We need TensorRT support to perform inference on a model built using Upsample layers (i.e., the SegNet architecture).
Thanks in advance.
Regards,
Vinay Kumar N
Hi Vinay,
Can you share a few things below?
Your segnet ONNX model so we can better look into what needs to be supported?
More details on how you’re trying to convert the model? API, trtexec, exact code, etc.
The errors / traceback you’re getting when trying to convert the model
Details on the platforms you are using:
o Linux distro and version
o GPU type
o Nvidia driver version
o CUDA version
o CUDNN version
o Python version [if using python]
o Tensorflow and PyTorch version
o TensorRT version
Also, there’s a GitHub issue tracking some known issues when trying to convert the Upsample op from PyTorch, there might be some useful/relevant information in this comment: https://github.com/NVIDIA/TensorRT/issues/284#issuecomment-572835659
Thanks NVES_R.
Your segnet ONNX model so we can better look into what needs to be supported?
Response: We are using a Caffe model, not an ONNX model. If required, we will convert the Caffe model to ONNX.
More details on how you’re trying to convert the model? API, trtexec, exact code, etc.
Response: We are using the Caffe model parser provided by NVIDIA.
Details on the platforms you are using:
o Linux distro and version: Windows 10 platform
o GPU type: GTX 1050Ti
o Nvidia driver version:
o CUDA version: CUDA 10
o CUDNN version: 7.6.4
o Python version [if using python]: 3.7.x
o Tensorflow : 2.0.0
o TensorRT version: 5.0.0
Hi Vinay,
ONNX has the best support at the moment for various operators and most frameworks support exporting to ONNX format. Additionally, Caffe/UFF parsers will be deprecated in the future per the TensorRT 7 release notes.
So if possible please convert the model to ONNX and then share it.
There are likely some useful links/examples here: https://github.com/onnx/tutorials#converting-to-onnx-format
Thank you NVES_R
Now, I am trying to convert the existing caffemodel to onnx model. During conversion, I am getting the following error:
File “convert2onnx.py”, line 34, in main
graph, params = loadcaffemodel(caffe_graph_path,caffe_params_path)
File “D:\VinayKumar\Programs\Task8(TRT_SegNet)\caffe-onnx-master\src\load_save_model.py”, line 9, in loadcaffemodel
text_format.Merge(open(net_path).read(), net)
File “C:\ProgramData\Anaconda3\lib\site-packages\google\protobuf\text_format.py”, line 702, in Merge
allow_unknown_field=allow_unknown_field)
File “C:\ProgramData\Anaconda3\lib\site-packages\google\protobuf\text_format.py”, line 770, in MergeLines
return parser.MergeLines(lines, message)
File “C:\ProgramData\Anaconda3\lib\site-packages\google\protobuf\text_format.py”, line 795, in MergeLines
self._ParseOrMerge(lines, message)
File “C:\ProgramData\Anaconda3\lib\site-packages\google\protobuf\text_format.py”, line 817, in _ParseOrMerge
self._MergeField(tokenizer, message)
File “C:\ProgramData\Anaconda3\lib\site-packages\google\protobuf\text_format.py”, line 942, in _MergeField
merger(tokenizer, message, field)
File “C:\ProgramData\Anaconda3\lib\site-packages\google\protobuf\text_format.py”, line 1016, in _MergeMessageField
self._MergeField(tokenizer, sub_message)
File “C:\ProgramData\Anaconda3\lib\site-packages\google\protobuf\text_format.py”, line 909, in _MergeField
(message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 57:3 : Message type “caffe.LayerParameter” has no field named “bn_param”.
Maybe the error occurs because ONNX does not support the “bn_param” layer.
Please help.
Thanks NVES_R. I have changed the name of bn_param to param as suggested in the link mentioned above.
But now I am getting a different error:
Failed at layer conv_decode4, layer’s bottom not detected …
Adding input info and tensor data for parameter conv_decode4_W
Adding input info and tensor data for parameter conv_decode4_b
Traceback (most recent call last):
File “convert2onnx.py”, line 42, in <module>
main(args)
File “convert2onnx.py”, line 35, in main
c2o = Caffe2Onnx(graph, params, onnx_name)
File “D:\VinayKumar\Programs\Task8(TRT_SegNet)\caffe-onnx-master\src\caffe2onnx.py”, line 28, in __init__
self.__getNodeList(LayerList)
File “D:\VinayKumar\Programs\Task8(TRT_SegNet)\caffe-onnx-master\src\caffe2onnx.py”, line 229, in __getNodeList
conv_node = op.createConv(Layers[i],nodename,inname,outname,input_shape)
File “D:\VinayKumar\Programs\Task8(TRT_SegNet)\caffe-onnx-master\src\OPs\Conv.py”, line 71, in createConv
output_shape = getConvOutShape(input_shape, layer, dict)
File “D:\VinayKumar\Programs\Task8(TRT_SegNet)\caffe-onnx-master\src\OPs\Conv.py”, line 49, in getConvOutShape
h = (input_shape[0][2] - kernel_shape[0] + pads[0] + pads[2] - (kernel_shape[0]-1)(dilations[0]-1))/strides[0] + 1 # output dim N = ((input dim I - kernel dim K + 2 * padding P - (K - 1) * (dilation - 1)) / stride S) + 1
IndexError: list index out of range
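For reference, the convolution output-size formula that failing line computes can be written as a standalone helper (a hypothetical sketch, not the converter's actual code). The IndexError itself suggests the pads/strides/dilations lists built for conv_decode4 are shorter than the formula expects, likely because that layer's prototxt omits some of those fields:

```python
def conv_out_dim(in_dim, kernel, pad_begin, pad_end, stride, dilation):
    """Output size N = (I - K + P_begin + P_end - (K - 1) * (D - 1)) // S + 1."""
    return (in_dim - kernel + pad_begin + pad_end
            - (kernel - 1) * (dilation - 1)) // stride + 1

# e.g. 224x224 input, 3x3 kernel, padding 1, stride 1, dilation 1 -> stays 224
print(conv_out_dim(224, 3, 1, 1, 1, 1))
```

A converter that defaults missing pad/stride/dilation values to 0/1/1 before indexing those lists would avoid this crash.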
Is it possible for you to save your weights to a file, create the model in PyTorch, and load your weights there? That may be an easier path to get the ONNX model. Or perhaps there’s a better conversion tool from Caffe->ONNX, I don’t know what is most common for Caffe.