PyTorch MobileNetV2 to TensorRT on Xavier (JetPack 4.2, TensorRT 6.0.1.10)

When I convert PyTorch MobileNetV2 to TensorRT on Xavier, I get the error below, but there is no error on my PC. Can you guys help me?

(Unnamed Layer* 151) [Constant]
Building an engine from file mbv2New.onnx; this may take a while…
[TensorRT] ERROR: Network must have at least one output
Completed creating Engine
Traceback (most recent call last):
File "torch_onnx_tensorrt.py", line 244, in <module>
'mbv2New.trt')
File "torch_onnx_tensorrt.py", line 94, in get_engine_fp32
f.write(engine.serialize())
AttributeError: 'NoneType' object has no attribute 'serialize'

Hi,

We tried to check this issue but hit an error when downloading the mbv2New.onnx file.
Could you validate the link for us first?

Thanks.

Thank you!
Here are my mobilenetv2.py and mobilenetv2.onnx: mbv2.zip (2.6 MB)

Original PyTorch model: mbv2_pth.zip (2.6 MB)

@AastaLLL

Hi,

Sorry for keeping you waiting.
We are still checking this issue.

We will post an update once we have made progress.
Thanks.

Thank you very much!

Have you made any progress?

Hi,

Sorry for the late update.

The error occurs at parser.parse(model.read()).
Since parsing fails, the engine is None, which leads to the error you reported.
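In other words, the script calls .serialize() on a None engine. A small hypothetical helper (safe_serialize is our name, not part of your script) makes the failure mode explicit instead of raising the AttributeError from the traceback:

```python
def safe_serialize(engine, path):
    """Write a serialized engine to disk, guarding against a failed build.

    When the ONNX parse fails, the builder returns None instead of an
    engine; calling .serialize() on None raises the AttributeError seen
    in the traceback above. Checking first surfaces a clearer error.
    """
    if engine is None:
        raise RuntimeError("Engine build failed; check the parser errors.")
    with open(path, "wb") as f:
        f.write(engine.serialize())

# Simulate the failure mode from the traceback: a build that returned None.
try:
    safe_serialize(None, "mbv2New.trt")
except RuntimeError as e:
    print(e)  # prints: Engine build failed; check the parser errors.
```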

We tested your model in the JetPack 4.4 production release environment.
It works correctly with the following update to torch_onnx_tensorrt.py:

diff --git a/torch_onnx_tensorrt.py b/torch_onnx_tensorrt.py
index c334091..8ec4c0a 100755
--- a/torch_onnx_tensorrt.py
+++ b/torch_onnx_tensorrt.py
@@ -1,10 +1,11 @@
-import torch
+#import torch
 import time
 import sys
 import tensorrt as trt
 import numpy as np
 import pycuda.driver as cuda
-from mobilenetv2 import MobileNetV2
+#from mobilenetv2 import MobileNetV2
+EXPLICIT_BATCH = 1 << (int)(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
 
 def saveONNX(model, filepath):
     model1 = MobileNetV2(22, 0.5)
@@ -20,14 +21,18 @@ def saveONNX(model, filepath):
 def get_engine_fp32(onnx_file_path, engine_file_path):
     G_LOGGER = trt.Logger(trt.Logger.WARNING)
 
-    with trt.Builder(G_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network, G_LOGGER) as parser:
+    with trt.Builder(G_LOGGER) as builder, builder.create_network(EXPLICIT_BATCH) as network, trt.OnnxParser(network, G_LOGGER) as parser:
         builder.max_batch_size = 1
         builder.max_workspace_size = 1 << 20  #1024
 
         print('Loading ONNX file from path {}...'.format(onnx_file_path))
         with open(onnx_file_path, 'rb') as model:
             print('Beginning ONNX file parsing')
-            parser.parse(model.read())
+            if not parser.parse(model.read()):
+                print ('ERROR: Failed to parse the ONNX file.')
+                for error in range(parser.num_errors):
+                    print (parser.get_error(error))
+                return None
 
         #last_layer = network.get_layer(network.num_layers - 1)
         #network.mark_output(last_layer.get_output(0))
@@ -47,7 +52,7 @@ def get_engine_fp32(onnx_file_path, engine_file_path):
         return engine
 
 if __name__ == '__main__':
-    saveONNX('mbv2.pth', "mbv2.onnx")
+#   saveONNX('mbv2.pth', "mbv2.onnx")
     get_engine_fp32('mbv2.onnx',
                  'mbv2.trt')
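The key change is creating the network with the explicit-batch flag, which the ONNX parser requires starting with TensorRT 6. The flag itself is just a bit mask; a minimal sketch of its computation, assuming trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH enumerates to 0 as in TensorRT's Python API:

```python
# Sketch of the EXPLICIT_BATCH flag computation from the diff above.
# Assumption: trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH has the
# integer value 0, so the network-creation flag is simply bit 0 set.
EXPLICIT_BATCH_ENUM = 0  # stands in for trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH
EXPLICIT_BATCH = 1 << EXPLICIT_BATCH_ENUM
print(EXPLICIT_BATCH)  # prints 1
```

The same mask is then passed to builder.create_network(EXPLICIT_BATCH), as shown in the diff.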

Thanks.