Running SSD-Lite from the PyTorch model zoo fails on TensorRT


Hey everyone, I am trying to run SSD-Lite from the PyTorch model zoo on a T4 using TensorRT, and the ONNX graph it generates fails to parse. Is there a workaround for this? Would using NVIDIA's SSD model work instead?

The error observed:

[08/04/2022-00:11:39] [I] TensorRT version: 8.4.1
[08/04/2022-00:11:39] [I] [TRT] [MemUsageChange] Init CUDA: CPU +311, GPU +0, now: CPU 319, GPU 586 (MiB)
[08/04/2022-00:11:40] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +207, GPU +68, now: CPU 545, GPU 654 (MiB)
[08/04/2022-00:11:40] [I] Start parsing network model
[08/04/2022-00:11:40] [I] [TRT] ----------------------------------------------------------------
[08/04/2022-00:11:40] [I] [TRT] Input filename:   ssdlite320_pytorch.onnx
[08/04/2022-00:11:40] [I] [TRT] ONNX IR version:  0.0.7
[08/04/2022-00:11:40] [I] [TRT] Opset version:    13
[08/04/2022-00:11:40] [I] [TRT] Producer name:    pytorch
[08/04/2022-00:11:40] [I] [TRT] Producer version: 1.12.0
[08/04/2022-00:11:40] [I] [TRT] Domain:           
[08/04/2022-00:11:40] [I] [TRT] Model version:    0
[08/04/2022-00:11:40] [I] [TRT] Doc string:       
[08/04/2022-00:11:40] [I] [TRT] ----------------------------------------------------------------
[08/04/2022-00:11:40] [W] [TRT] onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/04/2022-00:11:40] [W] [TRT] onnx2trt_utils.cpp:395: One or more weights outside the range of INT32 was clamped
[08/04/2022-00:11:40] [I] [TRT] No importer registered for op: NonZero. Attempting to import as plugin.
[08/04/2022-00:11:40] [I] [TRT] Searching for plugin: NonZero, plugin_version: 1, plugin_namespace: 
[08/04/2022-00:11:40] [E] [TRT] ModelImporter.cpp:773: While parsing node number 363 [NonZero -> "onnx::Transpose_2095"]:
[08/04/2022-00:11:40] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[08/04/2022-00:11:40] [E] [TRT] ModelImporter.cpp:775: input: "onnx::NonZero_2094"
output: "onnx::Transpose_2095"
name: "NonZero_1379"
op_type: "NonZero"

[08/04/2022-00:11:40] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[08/04/2022-00:11:40] [E] [TRT] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4890 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[08/04/2022-00:11:40] [E] Failed to parse onnx file
[08/04/2022-00:11:40] [I] Finish parsing network model
[08/04/2022-00:11:40] [E] Parsing model failed
[08/04/2022-00:11:40] [E] Failed to create engine from model or file.
[08/04/2022-00:11:40] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8401] # trtexec --onnx=ssdlite320_pytorch.onnx --saveEngine=ssdlite_engine_pytorch.trt --explicitBatch --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw


TensorRT Version: 8.4.1
GPU Type: T4
Nvidia Driver Version: 515.43.04
CUDA Version: 11.7
CUDNN Version: 8.4.1
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8
PyTorch Version (if applicable): 1.12.0+cu102

Steps To Reproduce

  1. Run the following code snippet in the above environment:
import torch
import os
import torch.onnx
import torchvision.models as models
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit

BATCH_SIZE = 1

ssdlite = models.detection.ssdlite320_mobilenet_v3_large(pretrained=True).eval()

dummy_input = torch.randn(BATCH_SIZE, 3, 320, 320)
torch.onnx.export(ssdlite, dummy_input, "ssdlite320_pytorch.onnx", verbose=False)

os.system('onnxsim ssdlite320_pytorch.onnx ssdlite320_pytorch.onnx --no-large-tensor')

os.system("trtexec --onnx=ssdlite320_pytorch.onnx --saveEngine=ssdlite_engine_pytorch.trt --explicitBatch --inputIOFormats=fp16:chw --outputIOFormats=fp16:chw")

Please provide pointers on fixing this.


The “NonZero” operator is not supported by TensorRT, so you may need to implement a custom plugin for it. For your reference:
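For background, NonZero is awkward for TensorRT because its output shape depends on the input *values*, not just the input shape, so a statically shaped engine cannot express it without a plugin. A quick NumPy illustration (`numpy.nonzero` mirrors the ONNX op's semantics):

```python
import numpy as np

# The number of nonzero entries -- and hence the output shape -- is only
# known once the actual values are seen at run time.
scores = np.array([0.9, 0.0, 0.3, 0.7])
kept = np.nonzero(scores > 0.5)[0]
print(kept.tolist())  # [0, 3]
```

This is exactly the score-threshold filtering in torchvision's detection post-processing that introduces NonZero into the exported graph.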

Please refer to the links below for custom plugin implementation guidance and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins, or refactor existing ones, to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.