Getting "Segmentation Fault" when converting RetinaNet model to TensorRT Engine File

Description

I am trying to convert the RetinaNet ResNet-18 model available in the README of the following repo.

However, I get a "Segmentation fault" error when converting the .pth file to an engine.plan file.

  1. I downloaded the ResNet-18 and ResNet-34 .pth models available in the README.md of this repo: GitHub - NVIDIA/retinanet-examples: Fast and accurate object detection with end-to-end GPU optimization
  2. I went through the list of commands to start the ODTK docker, namely:
git clone https://github.com/nvidia/retinanet-examples
docker build -t odtk:latest retinanet-examples/
docker run --gpus all --rm --ipc=host -it -v /home/vast/retinanet:/workspace/model odtk:latest
  3. Then I ran the following command inside the docker container, in the /workspace/model directory:
odtk export retinanet_rn18fpn.pth engine.plan
  4. I get an error here:
Loading model from retinanet_rn18fpn.pth...
     model: RetinaNet
  backbone: ResNet18FPN
   classes: 80, anchors: 9
Exporting to ONNX...
Building FP16 core model...
Segmentation fault (core dumped)

How to debug this error?

Environment

TensorRT Version: 7.2.2
GPU Type: Tesla M60
Nvidia Driver Version:
CUDA Version: 11.2
CUDNN Version:
Operating System + Version: Ubuntu
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): Built docker image using the following repo: GitHub - NVIDIA/retinanet-examples: Fast and accurate object detection with end-to-end GPU optimization

Hi,
Could you share the ONNX model and the script, if you have not already, so that we can assist you better?
Meanwhile, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the snippet below:

check_model.py

import sys
import onnx

# Usage: python check_model.py <your-model.onnx>
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)
print("ONNX model is valid.")
  2. Try running your model with the trtexec command.
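For step 2, a minimal sketch of how the trtexec invocation could be assembled and run from Python. This assumes trtexec is on your PATH (it ships with TensorRT); the --onnx, --saveEngine, --fp16, and --verbose flags are standard trtexec options, but the model and engine file names here are placeholders:

```python
# Sketch: build a trtexec command line and run it if trtexec is available.
import shutil
import subprocess

def build_trtexec_cmd(onnx_path, engine_path, fp16=True, verbose=True):
    """Assemble a trtexec command that builds an engine from an ONNX model."""
    cmd = ["trtexec", f"--onnx={onnx_path}", f"--saveEngine={engine_path}"]
    if fp16:
        cmd.append("--fp16")      # build an FP16 engine
    if verbose:
        cmd.append("--verbose")   # emit the detailed log requested below
    return cmd

cmd = build_trtexec_cmd("model.onnx", "engine.plan")
if shutil.which("trtexec"):
    subprocess.run(cmd, check=True)
else:
    # trtexec not installed here; just show the command that would run
    print(" ".join(cmd))
```

Running trtexec directly from a shell with the same flags is equivalent; the wrapper only makes it easy to capture the verbose log programmatically.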

In case you are still facing the issue, please share the trtexec "--verbose" log for further debugging.
Thanks!

I do not have an ONNX file. The command:

odtk export retinanet_rn18fpn.pth engine.plan

converts the .pth file into the engine.plan file. Although the log says:

Exporting to ONNX…

I do not know which directory the ONNX file is written to (if the ONNX file is being created at all).
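To check whether the exporter left an ONNX file anywhere, a small stdlib sketch that searches a directory tree for .onnx files. The /workspace/model path below is just the mount point from the docker run command above; adjust it as needed:

```python
# Sketch: find any .onnx files under a directory, newest first.
from pathlib import Path

def find_onnx_files(root="."):
    """Return paths of all .onnx files under `root`, most recently modified first."""
    files = sorted(Path(root).rglob("*.onnx"),
                   key=lambda p: p.stat().st_mtime, reverse=True)
    return [str(p) for p in files]

# Search the directory mounted into the ODTK container
print(find_onnx_files("/workspace/model"))
```

If this prints an empty list, the export likely crashed before the ONNX file was written to disk.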

Hi @subhankar.halder,

Please raise your concern on Issues · NVIDIA/retinanet-examples · GitHub to get better help.

Thank you.