Convert TensorFlow 2 model (pb format) to UFF

I'm trying to use the inference API (detectnet.py) to run an object detection model (an "efficientdet_d1" model trained with the TensorFlow Object Detection API 2 on a local machine) on a Jetson Nano. I gather that UFF conversion (uff.from_tensorflow_frozen_model) doesn't support direct conversion from TF2, so I'm following this pathway:

PB format -> ONNX format -> UFF
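For reference, the PB-to-ONNX step can be done with the tf2onnx converter. A typical invocation on a TF2 Object Detection API export looks like the following (the paths are placeholders; --opset 11 matches what the parser logs below report):

python -m tf2onnx.convert --saved-model efficientdet_d1_coco17_tpu-32/saved_model --opset 11 --output model_TRT.onnx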

I could successfully generate the ONNX model. However, running it through trtexec raised the issue below:

I was wondering if you have a solution.

Unsupported ONNX data type: UINT8 (2)
ERROR: ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)
[10/16/2020-14:30:38] [E] Failed to parse onnx file
[10/16/2020-14:30:38] [E] Parsing model failed
[10/16/2020-14:30:38] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/media/pk/Data/Project/Project/Venice/Maet_stage1/efficientdet_d1_coco17_tpu-32/6class/ONNX/model_TRT.onnx

Hi,

TensorRT supports the ONNX format directly, so you don't need to convert it to UFF.

To fix the unsupported data type issue, please check the topic below for information:
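In short, the workaround discussed there is to retype the model's UINT8 input as FLOAT before handing it to TensorRT. A minimal sketch with the onnx Python package (file names are placeholders, and downstream Cast nodes may also need adjusting):

import onnx

model = onnx.load("model_TRT.onnx")  # placeholder path

# The TensorRT 7 ONNX parser rejects UINT8 inputs, so retype them as FLOAT.
for inp in model.graph.input:
    tensor_type = inp.type.tensor_type
    if tensor_type.elem_type == onnx.TensorProto.UINT8:
        tensor_type.elem_type = onnx.TensorProto.FLOAT

onnx.save(model, "modified_model_TRT.onnx")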

Thanks.

Thanks for your prompt response.

Could you please let me know how I can load an ONNX model with 'jetson.inference.detectNet'? It only seems able to load the UFF format. When I try to load an ONNX model, it gives the error below. (Is there any tutorial explaining this?)

detectNet -- loading detection network model from:
          -- prototxt     NULL
          -- model        /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx
          -- input_blob   'data'
          -- output_cvg   'coverage'
          -- output_bbox  'bboxes'
          -- mean_pixel   0.000000
          -- mean_binary  NULL
          -- class_labels NULL
          -- threshold    0.500000
          -- batch_size   1

[TRT]    TensorRT version 7.1.3
[TRT]    loading NVIDIA plugins...
[TRT]    Registered plugin creator - ::GridAnchor_TRT version 1
[TRT]    Registered plugin creator - ::NMS_TRT version 1
[TRT]    Registered plugin creator - ::Reorg_TRT version 1
[TRT]    Registered plugin creator - ::Region_TRT version 1
[TRT]    Registered plugin creator - ::Clip_TRT version 1
[TRT]    Registered plugin creator - ::LReLU_TRT version 1
[TRT]    Registered plugin creator - ::PriorBox_TRT version 1
[TRT]    Registered plugin creator - ::Normalize_TRT version 1
[TRT]    Registered plugin creator - ::RPROI_TRT version 1
[TRT]    Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Registered plugin creator - ::CropAndResize version 1
[TRT]    Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT]    Registered plugin creator - ::Proposal version 1
[TRT]    Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT]    Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT]    Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT]    Registered plugin creator - ::Split version 1
[TRT]    Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT]    Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT]    detected model format - ONNX  (extension '.onnx')
[TRT]    desired precision specified for GPU: FASTEST
[TRT]    requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]    native precisions detected for GPU:  FP32, FP16
[TRT]    selecting fastest native precision for GPU:  FP16
[TRT]    attempting to open engine cache file /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx.1.1.7103.GPU.FP16.engine
[TRT]    cache file not found, profiling network model on device GPU
[TRT]    device GPU, loading /usr/bin/ /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx
----------------------------------------------------------------
Input filename:   /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    tf2onnx
Producer version: 1.8.0
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[TRT]    Plugin creator already registered - ::GridAnchor_TRT version 1
[TRT]    Plugin creator already registered - ::NMS_TRT version 1
[TRT]    Plugin creator already registered - ::Reorg_TRT version 1
[TRT]    Plugin creator already registered - ::Region_TRT version 1
[TRT]    Plugin creator already registered - ::Clip_TRT version 1
[TRT]    Plugin creator already registered - ::LReLU_TRT version 1
[TRT]    Plugin creator already registered - ::PriorBox_TRT version 1
[TRT]    Plugin creator already registered - ::Normalize_TRT version 1
[TRT]    Plugin creator already registered - ::RPROI_TRT version 1
[TRT]    Plugin creator already registered - ::BatchedNMS_TRT version 1
[TRT]    Could not register plugin creator -  ::FlattenConcat_TRT version 1
[TRT]    Plugin creator already registered - ::CropAndResize version 1
[TRT]    Plugin creator already registered - ::DetectionLayer_TRT version 1
[TRT]    Plugin creator already registered - ::Proposal version 1
[TRT]    Plugin creator already registered - ::ProposalLayer_TRT version 1
[TRT]    Plugin creator already registered - ::PyramidROIAlign_TRT version 1
[TRT]    Plugin creator already registered - ::ResizeNearest_TRT version 1
[TRT]    Plugin creator already registered - ::Split version 1
[TRT]    Plugin creator already registered - ::SpecialSlice_TRT version 1
[TRT]    Plugin creator already registered - ::InstanceNormalization_TRT version 1
Unsupported ONNX data type: UINT8 (2)
ERROR: input_tensor:0:188 In function importInput:
[8] Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype)
[TRT]    failed to parse ONNX model '/home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx'
[TRT]    device GPU, failed to load /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx
[TRT]    detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
  File "detectnet-camera.py", line 49, in <module>
    net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)
Exception: jetson.inference -- detectNet failed to load network
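For context, detectNet can load ONNX models when the blob names are passed explicitly on the command line; the defaults shown above ('data', 'coverage', 'bboxes') belong to the Caffe-based DetectNet models. The flags below follow the jetson-inference pytorch-ssd retraining tutorial (paths and blob names are assumptions, and an EfficientDet graph exported by tf2onnx will likely not match this expected output layout):

detectnet-camera.py --model=model_tf2.onnx --labels=labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes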

Update on the latest progress:

@AastaLLL I followed the instructions for changing UINT8 to Float32. Using the modified ONNX model (with Float32), I get the error below from both 'jetson.inference.detectNet' and trtexec:

ERROR: builtin_op_importers.cpp:1554 In function importIf:
[8] Assertion failed: cond.is_weights() && cond.weights().count() == 1 && "If condition must be a initializer!"
[TRT]    failed to parse ONNX model '/home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/modifed_model_TRT.onnx'
[TRT]    device GPU, failed to load /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/modifed_model_TRT.onnx
[TRT]    detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
  File "detectnet-camera.py", line 49, in <module>
    net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)
Exception: jetson.inference -- detectNet failed to load network
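A workaround sometimes suggested for this importIf assertion is to constant-fold the graph so that the If condition becomes an initializer. A minimal sketch with onnx-graphsurgeon (file names are placeholders; folding is not guaranteed to resolve every If node):

import onnx
import onnx_graphsurgeon as gs

# Load the ONNX model into a GraphSurgeon graph (placeholder path).
graph = gs.import_onnx(onnx.load("modified_model_TRT.onnx"))

# Fold everything that can be computed ahead of time; if the If condition
# becomes a constant, the parser's requirement may be satisfied.
# (fold_constants uses onnxruntime under the hood.)
graph.fold_constants().cleanup()

onnx.save(gs.export_onnx(graph), "folded_model_TRT.onnx")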

Hi,

Could you run the modified model via trtexec with --verbose and share the log with us?

Thanks.

Thanks @AastaLLL.

This is the output log from running with --verbose:

/usr/src/tensorrt/bin/trtexec --onnx=/media/pk/Data/Project/Project/Venice/Maet_stage1/efficientdet_d1_coco17_tpu-32/6class/ONNX/modifed_model_TRT.onnx --verbose
&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/media/pk/Data/Project/Project/Venice/Maet_stage1/efficientdet_d1_coco17_tpu-32/6class/ONNX/modifed_model_TRT.onnx --verbose
[10/17/2020-14:56:09] [I] === Model Options ===
[10/17/2020-14:56:09] [I] Format: ONNX
[10/17/2020-14:56:09] [I] Model: /media/pk/Data/Project/Project/Venice/Maet_stage1/efficientdet_d1_coco17_tpu-32/6class/ONNX/modifed_model_TRT.onnx
[10/17/2020-14:56:09] [I] Output:
[10/17/2020-14:56:09] [I] === Build Options ===
[10/17/2020-14:56:09] [I] Max batch: 1
[10/17/2020-14:56:09] [I] Workspace: 16 MB
[10/17/2020-14:56:09] [I] minTiming: 1
[10/17/2020-14:56:09] [I] avgTiming: 8
[10/17/2020-14:56:09] [I] Precision: FP32
[10/17/2020-14:56:09] [I] Calibration:
[10/17/2020-14:56:09] [I] Safe mode: Disabled
[10/17/2020-14:56:09] [I] Save engine:
[10/17/2020-14:56:09] [I] Load engine:
[10/17/2020-14:56:09] [I] Inputs format: fp32:CHW
[10/17/2020-14:56:09] [I] Outputs format: fp32:CHW
[10/17/2020-14:56:09] [I] Input build shapes: model
[10/17/2020-14:56:09] [I] === System Options ===
[10/17/2020-14:56:09] [I] Device: 0
[10/17/2020-14:56:09] [I] DLACore:
[10/17/2020-14:56:09] [I] Plugins:
[10/17/2020-14:56:09] [I] === Inference Options ===
[10/17/2020-14:56:09] [I] Batch: 1
[10/17/2020-14:56:09] [I] Iterations: 10 (200 ms warm up)
[10/17/2020-14:56:09] [I] Duration: 10s
[10/17/2020-14:56:09] [I] Sleep time: 0ms
[10/17/2020-14:56:09] [I] Streams: 1
[10/17/2020-14:56:09] [I] Spin-wait: Disabled
[10/17/2020-14:56:09] [I] Multithreading: Enabled
[10/17/2020-14:56:09] [I] CUDA Graph: Disabled
[10/17/2020-14:56:09] [I] Skip inference: Disabled
[10/17/2020-14:56:09] [I] Input inference shapes: model
[10/17/2020-14:56:09] [I] === Reporting Options ===
[10/17/2020-14:56:09] [I] Verbose: Enabled
[10/17/2020-14:56:09] [I] Averages: 10 inferences
[10/17/2020-14:56:09] [I] Percentile: 99
[10/17/2020-14:56:09] [I] Dump output: Disabled
[10/17/2020-14:56:09] [I] Profile: Disabled
[10/17/2020-14:56:09] [I] Export timing to JSON file:
[10/17/2020-14:56:09] [I] Export profile to JSON file:
[10/17/2020-14:56:09] [I]
[10/17/2020-14:56:09] [V] [TRT] Plugin Creator registration succeeded - GridAnchor_TRT
[10/17/2020-14:56:09] [V] [TRT] Plugin Creator registration succeeded - NMS_TRT
[10/17/2020-14:56:09] [V] [TRT] Plugin Creator registration succeeded - Reorg_TRT
[10/17/2020-14:56:09] [V] [TRT] Plugin Creator registration succeeded - Region_TRT
[10/17/2020-14:56:09] [V] [TRT] Plugin Creator registration succeeded - Clip_TRT
[10/17/2020-14:56:09] [V] [TRT] Plugin Creator registration succeeded - LReLU_TRT
[10/17/2020-14:56:09] [V] [TRT] Plugin Creator registration succeeded - PriorBox_TRT
[10/17/2020-14:56:09] [V] [TRT] Plugin Creator registration succeeded - Normalize_TRT
[10/17/2020-14:56:09] [V] [TRT] Plugin Creator registration succeeded - RPROI_TRT
[10/17/2020-14:56:09] [V] [TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
[10/17/2020-14:56:09] [V] [TRT] Plugin Creator registration succeeded - FlattenConcat_TRT

Input filename: /media/pk/Data/Project/Project/Venice/Maet_stage1/efficientdet_d1_coco17_tpu-32/6class/ONNX/modifed_model_TRT.onnx
ONNX IR version: 0.0.7
Opset version: 11
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:

WARNING: ONNX model has a newer ir_version (0.0.7) than this parser was built against (0.0.3).
[10/17/2020-14:56:10] [E] [TRT] Parameter check failed at: ../builder/Network.cpp::addInput::671, condition: isValidDims(dims, hasImplicitBatchDimension())
ERROR: ModelImporter.cpp:80 In function importInput:
[8] Assertion failed: *tensor = importer_ctx->network()->addInput( input.name().c_str(), trt_dtype, trt_dims)
[10/17/2020-14:56:10] [E] Failed to parse onnx file
[10/17/2020-14:56:10] [E] Parsing model failed
[10/17/2020-14:56:10] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/media/pk/Data/Project/Project/Venice/Maet_stage1/efficientdet_d1_coco17_tpu-32/6class/ONNX/modifed_model_TRT.onnx --verbose
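For what it's worth, this addInput/isValidDims failure usually means the ONNX input still has dynamic (unknown) dimensions, which trtexec cannot handle in implicit-batch mode. One option is to pin the input to a static shape before parsing; a minimal sketch with the onnx Python package (the 1x640x640x3 NHWC shape assumes EfficientDet-D1's default 640x640 resolution, and file names are placeholders):

import onnx

model = onnx.load("modified_model_TRT.onnx")  # placeholder path

# Pin every dynamic input dimension to a static NHWC shape.
for inp in model.graph.input:
    dims = inp.type.tensor_type.shape.dim
    for dim, size in zip(dims, (1, 640, 640, 3)):
        dim.ClearField("dim_param")  # drop symbolic names like 'unk__123'
        dim.dim_value = size

onnx.save(model, "static_model_TRT.onnx")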

Hi,

Sorry for the late update.

Could you try the suggestion in the comment below first:

Thanks.