Convert Hourglass to TensorRT

Description

For a while now, I have been trying to convert the Hourglass model of CenterNet to TensorRT through the ONNX format. Below is part of the script I used to convert the Hourglass model to ONNX:

from collections import OrderedDict
from types import MethodType

import torch
from torch.onnx import OperatorExportTypes

def hourglass_forward(self, x):
    inter = self.pre(x)
    ret = []
    for ind in range(self.nstack):
        kp_, cnv_ = self.kps[ind], self.cnvs[ind]
        kp = kp_(inter)
        cnv = cnv_(kp)

        # collect the output of every head for this stack
        out = []
        for head in self.heads:
            layer = self.__getattr__(head)[ind]
            out.append(layer(cnv))
        ret.append(out)

        # feed the intermediate features into the next hourglass stack
        if ind < self.nstack - 1:
            inter = self.inters_[ind](inter) + self.cnvs_[ind](cnv)
            inter = self.relu(inter)
            inter = self.inters[ind](inter)
    return ret
...
opt = opts().init()
opt.arch = 'hourglass_104'
opt.heads = OrderedDict([('hm', 80), ('reg', 2), ('wh', 2)])
opt.head_conv = 256 if 'hourglass' in opt.arch else 64
print(opt)
model = create_model(opt.arch, opt.heads, opt.head_conv)
# rebind the export-friendly forward; 'hourglass_104'.split('_')[0] == 'hourglass'
model.forward = MethodType(forward[opt.arch.split('_')[0]], model)
load_model(model, 'ctdet_coco_hg.pth')
model.eval()
model.cuda()
input = torch.zeros([1, 3, 512, 512]).cuda()
torch.onnx.export(model, input, "ctdet_coco_hg.onnx", verbose=True,
                  operator_export_type=OperatorExportTypes.ONNX)
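One knob that may be relevant to the parser warnings shown below: TensorRT 6's ONNX parser was built against an older IR version, so pinning the opset at export time is a common first step. A minimal variant of the export call; the opset value 9 is an assumption, not something verified against this model:

# Assumption: opset 9 as a conservative target for the TensorRT 6 parser;
# torch.onnx.export accepts opset_version in PyTorch 1.2.
torch.onnx.export(model, input, "ctdet_coco_hg.onnx", verbose=True,
                  opset_version=9,
                  operator_export_type=OperatorExportTypes.ONNX)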

There may be some mistakes in my script, but the conversion to ONNX format completed. However, when I try the first TensorRT step to generate the engine, it shows these errors:
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
While parsing node number 208 [Gather]:
3
ERROR: …/onnx2trt_utils.hpp:335 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
ERROR: failed to parse onnx file

Is there any suggestion about these errors?

Environment

TensorRT Version: 6.0.1.5
GPU Type: RTX 6000
Nvidia Driver Version: 455.32
CUDA Version: 10.1
CUDNN Version: 7.5.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): Python 3.6.0
TensorFlow Version (if applicable): 2.0.0
PyTorch Version (if applicable): 1.2.0


Hi, we request you to share the ONNX model and the script so that we can assist you better.

Alongside, you can try validating your model with the snippet below.

check_model.py

import onnx

filename = "your_model.onnx"  # path to the ONNX model under test
model = onnx.load(filename)
onnx.checker.check_model(model)

Alternatively, you can try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
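For example, a minimal sketch of such a run (the file names are placeholders, and --explicitBatch is assumed to be needed because the network comes from an ONNX export):

trtexec --onnx=ctdet_coco_hg.onnx --explicitBatch --saveEngine=ctdet_coco_hg.engine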

Thanks!

I have shared the script and the ONNX file below:
https://drive.google.com/file/d/1nLa0b1S-0gsCgrHd5RwEzjDTrOB8X2J1/view?usp=sharing
conv_onnx_hg.py (1.6 KB)

Hi @haythem-AI,
Please provide access to the model.

Thanks!

Hi @AakankshaS,

Yes, I have made it open access.

Thank you

Hi @haythem-AI ,
It looks like the issue is with your ONNX model.
Please check your model using the reference in the link below.

You can raise ONNX-related queries in the respective forum.

Thanks!

Hi @AakankshaS ,

But I already tried checking the model as in the first reply from @NVES, and the checker gives me this message:

[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 765786980
The model is checked!

Thanks!!
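Since the checker passes while the TensorRT 6 parser still rejects node 208, the failing Gather most likely operates on a shape tensor (for example, one produced by tensor.size() calls) with an axis the parser cannot normalize. A commonly suggested workaround is to constant-fold those shape subgraphs before parsing; a minimal sketch, assuming the onnx-simplifier (onnxsim) package is installed:

import onnx
from onnxsim import simplify  # assumption: pip install onnx-simplifier

# Constant-fold shape/Gather subgraphs so the TensorRT 6 parser
# no longer sees axes it cannot handle.
model = onnx.load("ctdet_coco_hg.onnx")
model_simp, ok = simplify(model)
assert ok, "simplified model failed the ONNX checker"
onnx.save(model_simp, "ctdet_coco_hg_simplified.onnx")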