EfficientDet-D0 to TensorRT problem



I trained an EfficientDet-D0 network with TensorFlow 2.15. Exporting it to TFLite works.
According to:

I exported it:
!python /content/models/research/object_detection/exporter_main_v2.py \
    --input_type image_tensor \
    --trained_checkpoint_dir {last_model_path} \
    --output_directory {output_directory_exporter_main_v2} \
    --pipeline_config_path {pipeline_file}

Next Step is:
python create_onnx.py --input_size '512,512' --saved_model /home/jetson/saved_model/ --onnx onnx.onnx

And the last lines are:
INFO:EfficientDetGraphSurgeon:Updating Reshape node StatefulPartitionedCall/Postprocessor/Reshape_2 to [-1 49104 4]
INFO:EfficientDetGraphSurgeon:Found Concat node 'StatefulPartitionedCall/concat_1' as the tip of /WeightSharedConvolutionalClassHead/
INFO:EfficientDetGraphSurgeon:Found Concat node 'StatefulPartitionedCall/concat' as the tip of /WeightSharedConvolutionalBoxHead/
Traceback (most recent call last):
  File "/usr/src/tensorrt/samples/python/efficientdet/create_onnx.py", line 454, in <module>
  File "/usr/src/tensorrt/samples/python/efficientdet/create_onnx.py", line 427, in main
    effdet_gs.update_nms(args.nms_threshold, args.nms_detections)
  File "/usr/src/tensorrt/samples/python/efficientdet/create_onnx.py", line 379, in update_nms
    anchors_tensor = extract_anchors_tensor(box_net_split)
  File "/usr/src/tensorrt/samples/python/efficientdet/create_onnx.py", line 322, in extract_anchors_tensor
    anchors = np.concatenate([anchors_y, anchors_x, anchors_h, anchors_w], axis=2)
  File "<__array_function__ internals>", line 180, in concatenate
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 49104 and the array at index 2 has size 1
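For what it's worth, the failure mode can be reproduced in isolation: the error message says the array at index 2 (the anchor heights) has size 1 along dimension 1 where the y/x anchor arrays have 49104. A hypothetical minimal repro (shapes are illustrative, this is not the sample's actual code):

```python
import numpy as np

# Per-anchor y/x tensors carry one value per box, but the h/w tensors
# appear to have been folded down to a single broadcastable value,
# which np.concatenate cannot join along axis=2.
anchors_y = np.zeros((1, 49104, 1))
anchors_x = np.zeros((1, 49104, 1))
anchors_h = np.zeros((1, 1, 1))
anchors_w = np.zeros((1, 1, 1))

try:
    np.concatenate([anchors_y, anchors_x, anchors_h, anchors_w], axis=2)
except ValueError as e:
    print(e)  # the same "dimensions ... must match exactly" message as above
```

So whatever changed in the retrained/exported graph seems to leave the box-head anchor constants with degenerate shapes before this concatenation.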

What am I doing wrong?



TensorRT Version: 8.6.2
GPU Type: Orin Nano
Nvidia Driver Version: JetPack 6.0 DP
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable): 2.15
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

create_onnx.log (9.5 KB)

Hi @s.lerch ,
Can you please check whether you have a valid ONNX model?
import onnx

filename = "yourONNXmodel"  # path to your exported .onnx file
model = onnx.load(filename)
onnx.checker.check_model(model)

Any luck solving this? I get the same error. The ONNX model is created, but the output of the above check shows:

onnx.onnx_cpp2py_export.checker.ValidationError: Field 'shape' of 'type' is required but missing.

Still trying…
Did you use tf2onnx.convert?

This is exactly my problem.
It generates no ONNX file.
It seems that when I set the scale parameters of the faster_rcnn_box_coder to 1.0 in the pipeline_file.config, I at least get an ONNX file. However, onnx.checker.check_model(model) shows that the model is not valid.
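For reference, the change described above looks roughly like this in the pipeline config (the usual TFOD defaults are y/x scale 10.0 and height/width scale 5.0; setting everything to 1.0 is the workaround being discussed here, not a recommended fix, since it changes how box regressions are decoded):

```
box_coder {
  faster_rcnn_box_coder {
    y_scale: 1.0
    x_scale: 1.0
    height_scale: 1.0
    width_scale: 1.0
  }
}
```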

I followed the instructions here: TensorRT/samples/python/efficientdet at release/8.6 · NVIDIA/TensorRT · GitHub
create_onnx.py works for the TFOD models downloaded and exported directly, but not after I retrain them.

So I did. The conversion to TRT has only worked for me once so far, and I achieved it by setting the faster_rcnn_box_coder values to 1. Quite questionable, and more of a desperate move.

Link to my files:

Thanks. I tried setting faster_rcnn_box_coder to 1 in my config when exporting the model, and create_onnx.py then seemed to work, but building the TRT engine fails with this error:

ERROR:EngineBuilder:In node -1 (importModel): INVALID_VALUE: Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."

I think EXPLICIT_BATCH NetworkDefinitionCreationFlag means this:

import tensorrt

explicit_batch = 1 << int(tensorrt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
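A minimal sketch of where that flag goes when building the engine by hand (assuming TensorRT's Python bindings are installed on the Jetson and onnx.onnx is the exported model; untested here, and the sample's own build_engine.py already does this):

```python
import tensorrt as trt

# The ONNX parser in TensorRT 8.x only accepts networks created with
# EXPLICIT_BATCH, i.e. the batch dimension is part of the tensor shapes
# rather than implicit.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("onnx.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
```

If build_engine.py from the sample already sets this flag and you still see the error, the ONNX file itself is probably malformed (which would match the check_model failure above) rather than the builder flags being wrong.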