Assertion Error in ScopedSlotSetup when buildEngineWithConfig() is called

Description

An ONNX model was loaded and works well with TensorRT 7.0.0 on Windows. When I tried to use the same ONNX model with TensorRT 6.0.1 on Ubuntu, an assertion failed when buildEngineWithConfig() was called.
The reported output follows.

...
UNKNOWN: (Unnamed Layer* 468) [Convolution] (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_interior_nn_v1
UNKNOWN: (Unnamed Layer* 471) [Convolution] (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_interior_nn_v1
UNKNOWN: (Unnamed Layer* 518) [Convolution] (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_small_nn_v1
UNKNOWN: (Unnamed Layer* 521) [Convolution] (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_interior_nn_v1
UNKNOWN: (Unnamed Layer* 530) [Convolution] (scudnn) Set Tactic Name: maxwell_scudnn_128x128_relu_interior_nn_v1
UNKNOWN: (Unnamed Layer* 690) [Convolution] + (Unnamed Layer* 693) [ElementWise] (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_interior_nn_v1
UNKNOWN: (Unnamed Layer* 689) [Convolution] + (Unnamed Layer* 695) [ElementWise] (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_interior_nn_v1
UNKNOWN: (Unnamed Layer* 702) [Convolution] (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_small_nn_v1
UNKNOWN: (Unnamed Layer* 796) [Convolution] (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_small_nn_v1
UNKNOWN: (Unnamed Layer* 706) [Convolution] (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_small_nn_v1
UNKNOWN: (Unnamed Layer* 708) [Convolution] (scudnn) Set Tactic Name: maxwell_scudnn_128x128_relu_medium_nn_v1
UNKNOWN: (Unnamed Layer* 800) [Convolution] (scudnn) Set Tactic Name: maxwell_scudnn_128x64_relu_interior_nn_v1
INTERNAL_ERROR: Assertion failed: profile != nullptr && "need profile"
../builder/cudnnBuilder2.cpp:2204
Aborting...

ERROR: ../builder/cudnnBuilder2.cpp (2204) - Assertion Error in ScopedSlotSetup: 0 (profile != nullptr && "need profile")
Segmentation fault (core dumped)

Environment

TensorRT Version: 6.0.1
GPU Type: 1080Ti
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 16.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

nanodet.onnx (7.8 MB)


Steps To Reproduce

Hi,
Request you to share the ONNX model and the script, if not already shared, so that we can assist you better.
Meanwhile, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

1. Validate your model with the snippet below.

check_model.py

import onnx

filename = yourONNXmodel  # placeholder: path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)  # raises if the model is invalid
2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, request you to share the trtexec --verbose log for further debugging.
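
For step 2, a typical invocation might look like the sketch below. The model filename is taken from the attachment above; `--explicitBatch` applies when the ONNX parser builds an explicit-batch network, and the optimization-profile note is an assumption prompted by the `profile != nullptr` assertion in your log. The input tensor name `input` in the comment is hypothetical — check the real name with Netron or the onnx package.

```shell
# Sketch: build the engine with trtexec and keep a verbose log.
# Skip gracefully when trtexec is not on PATH (it ships with TensorRT).
if command -v trtexec >/dev/null 2>&1; then
    # If the model has dynamic input shapes, newer trtexec builds also accept
    # an optimization profile on the command line, e.g.
    # --minShapes=input:1x3x320x320 --optShapes=... --maxShapes=...
    # (the tensor name and dimensions here are hypothetical).
    trtexec --onnx=nanodet.onnx --explicitBatch --verbose 2>&1 | tee trtexec_verbose.log
else
    echo "trtexec not found; build it from the TensorRT samples first"
fi
```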
Thanks!

Hi @vic_cwq,

Looks like you’re using an old TensorRT version; many issues have been fixed in the latest release. We recommend you upgrade to the latest TensorRT version.

Thank you.