build_cuda_engine fails silently if a Reshape operation is in the ONNX model

TensorRT version:
PyTorch version: 1.1.0

ONNX model print:

graph(%data : Float(120, 320, 672, 3)):
  %1 : Tensor = onnx::Constant[value=   4   90  320  672 [ Variable[CPUType]{4} ]](), scope: DummyNet
  %prob : Float(4, 90, 320, 672) = onnx::Reshape(%data, %1), scope: DummyNet
  return (%prob)
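For reference, the element counts in this graph do line up, which is why the ONNX export itself succeeds: 120·320·672·3 equals 4·90·320·672. A quick sanity check in plain Python (just illustrating the arithmetic, not part of the original code):

```python
from functools import reduce
from operator import mul

def volume(shape):
    """Product of all dimensions in a shape."""
    return reduce(mul, shape, 1)

in_shape = (120, 320, 672, 3)
out_shape = (4, 90, 320, 672)

# Total element counts match, so the Reshape is valid ONNX...
print(volume(in_shape) == volume(out_shape))   # True

# ...but the batch dimension changes (120 -> 4), which is what
# the implicit-batch TensorRT builder cannot handle.
print(in_shape[0] == out_shape[0])             # False
```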

Some code I used:

import torch
import torch.nn as nn

class DummyNet(nn.Module):
    def __init__(self):
        super(DummyNet, self).__init__()

    def forward(self, x):
        # x = x * (1 / 255)
        x = x.view([4, 90, 320, 672])  # <<< reason for the TRT failure!
        return x

model = DummyNet()
data = torch.rand(120, 320, 672, 3)

input_names = ["data"] + ["learned_%d" % i for i in range(len(list(model.parameters())))]
output_names = ["prob"]
torch.onnx.export(model, data, "model.onnx", verbose=1,
                  input_names=input_names, output_names=output_names)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network()
parser = trt.OnnxParser(network, TRT_LOGGER)
builder.max_batch_size = batch_size

# parse the ONNX file into the network (this call was missing originally)
with open(onnx_model_name, 'rb') as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

engine = builder.build_cuda_engine(network)
assert engine is not None  # <<< assertion fails here

builder.build_cuda_engine silently returns None if a Reshape is in the ONNX model.
The builder also returns None if

x = x * (1/255)
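If the in-graph scalar multiply is also tripping the parser, one possible workaround (my assumption, not something confirmed in this thread) is to drop the multiply from forward() and normalize on the host before feeding the engine; the two are numerically equivalent up to float rounding:

```python
import math

raw = [0.0, 1.0, 127.0, 255.0]  # example pixel values

# In-graph version: x * (1 / 255), as in the original forward()
in_graph = [v * (1 / 255) for v in raw]

# Host-side version: divide during preprocessing instead,
# so no Mul/Constant nodes end up in the exported ONNX graph
host_side = [v / 255 for v in raw]

for a, b in zip(in_graph, host_side):
    assert math.isclose(a, b, rel_tol=1e-9)
```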

I also attached the ONNX file, renamed to model_opt.onnx.avi (thank you, forum policy).


In this case it’s reshaping the batch dimension - that is not yet supported.
For example: if the input is (N, C, H, W), the reshape can be (N, \*, \*, \*) or (N, \*), where the total non-batch volume matches in both cases.
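To make that constraint concrete, here is a small sketch (my own illustration, not a TensorRT API) of the rule: the batch dimension must be unchanged, and the non-batch element counts must match:

```python
from functools import reduce
from operator import mul

def _volume(dims):
    return reduce(mul, dims, 1)

def is_valid_implicit_batch_reshape(in_shape, out_shape):
    """Batch dim must be preserved and non-batch volumes must match."""
    return (in_shape[0] == out_shape[0]
            and _volume(in_shape[1:]) == _volume(out_shape[1:]))

# The reshape from the question: batch goes 120 -> 4, so TRT rejects it.
print(is_valid_implicit_batch_reshape((120, 320, 672, 3), (4, 90, 320, 672)))  # False

# A batch-preserving flatten of the same tensor would be fine.
print(is_valid_implicit_batch_reshape((120, 320, 672, 3), (120, 320 * 672 * 3)))  # True
```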

I would also recommend using the latest TRT version.


I must say that a better option for converting PyTorch models to TRT is using a library like torch2trt, which uses the TRT API directly. Why didn’t you mention it?


Yes, you can use torch2trt to convert your PyTorch model to TRT.
Please find the below link for more details:

Please note, this converter has limited coverage of TensorRT / PyTorch. We created it primarily to easily optimize the models used in the JetBot project.

Hence I hadn’t recommended it earlier, but if it works in your case you can use this approach as well.
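For completeness, a minimal torch2trt sketch, assuming torch2trt and torchvision are installed and a CUDA device is available (this mirrors the converter’s documented entry point; the model and names here follow its README, not anything from this thread):

```python
import torch
from torch2trt import torch2trt
from torchvision.models import alexnet

# Create a model and an example input on the GPU
# (torch2trt uses the example input to trace the network).
model = alexnet(pretrained=True).eval().cuda()
x = torch.ones((1, 3, 224, 224)).cuda()

# Convert to TensorRT; the result is a TRTModule
# that can be called like the original model.
model_trt = torch2trt(model, [x])

# Compare outputs of the original and converted models.
y = model(x)
y_trt = model_trt(x)
print(torch.max(torch.abs(y - y_trt)))
```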