TensorRT Python EfficientDet sample question

Hi, I was working with the EfficientDet create_onnx.py script and I noticed that this line doesn't get run if you run exporter_main_v2.py (as per the TFOD instructions). The line does run if you use the saved_model directly from training (without using the exporter).
Fortunately, the resulting ONNX still seems to work fine. I was wondering: if the nodes are not disconnected, could this have some effect on model speed?
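
In case it helps with testing, here is a minimal sketch of how I could strip any leftover nodes myself using onnx-graphsurgeon (the library create_onnx.py is built on; model.onnx and cleaned.onnx are placeholder paths), so the two variants can be benchmarked against each other:

cleanup_graph.py

import onnx
import onnx_graphsurgeon as gs

# Load the exported ONNX graph (model.onnx is a placeholder path).
graph = gs.import_onnx(onnx.load("model.onnx"))

# cleanup() drops nodes and tensors that do not feed the graph outputs;
# toposort() restores a valid topological ordering afterwards.
graph.cleanup().toposort()

onnx.save(gs.export_onnx(graph), "cleaned.onnx")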

Hi,
Could you share the ONNX model and the script, if you haven't already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

# Load the ONNX model from the path given on the command line.
filename = sys.argv[1]
model = onnx.load(filename)

# check_model() raises an exception if the model is malformed.
onnx.checker.check_model(model)
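
You can run it like this (model.onnx is a placeholder path):

python check_model.py model.onnx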
  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
If you are still facing issues, please share the trtexec --verbose log for further debugging.
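
For example, a typical invocation looks like this (a sketch; model.onnx and model.engine are placeholder paths):

trtexec --onnx=model.onnx --saveEngine=model.engine --verbose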
Thanks!

To test, you can just download the EfficientDet D0 model from models/tf2_detection_zoo.md at master · tensorflow/models · GitHub

and follow the instructions for TFOD models here: TensorRT/samples/python/efficientdet at master · NVIDIA/TensorRT · GitHub

You will find that if the saved_model has been exported (converted to an inference model) using exporter_main_v2.py, then line 146 of create_onnx.py never gets run.
(It does get run if you use the saved_model directly from the TFOD zoo, but that model is not optimized for inference.)
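
For reference, this is the export step I mean, sketched with placeholder paths (and --input_type float_image_tensor, which I believe is what the TensorRT sample's TFOD instructions expect):

python exporter_main_v2.py \
    --input_type float_image_tensor \
    --pipeline_config_path /path/to/pipeline.config \
    --trained_checkpoint_dir /path/to/checkpoint \
    --output_directory /path/to/exported_model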

Hi,

We recommend that you post your concern on Issues · NVIDIA/TensorRT · GitHub to get better help.

Thank you.