I'm trying to use the inference API (detectnet.py) to run an object detection model (an "efficientdet_d1" model trained with the TensorFlow 2 Object Detection API on a local machine) on a Jetson Nano. I gather that UFF conversion (uff.from_tensorflow_frozen_model) doesn't support TF2 directly, so I'm following this pathway:
PB format —> ONNX format —> TensorRT engine (built with trtexec)
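For reference, the PB —> ONNX step used tf2onnx (version 1.8.0, per the parser output further below); the conversion command was along these lines, with illustrative paths:

python3 -m tf2onnx.convert --saved-model saved_model/ --output model_TRT.onnx --opset 11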
I could successfully generate the ONNX model. However, running "trtexec" on it raised the issue below; I was wondering if you have a solution:
Unsupported ONNX data type: UINT8 (2)
ERROR: ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)
[10/16/2020-14:30:38] [E] Failed to parse onnx file
[10/16/2020-14:30:38] [E] Parsing model failed
[10/16/2020-14:30:38] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/media/pk/Data/Project/Project/Venice/Maet_stage1/efficientdet_d1_coco17_tpu-32/6class/ONNX/model_TRT.onnx
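The unsupported UINT8 input can be confirmed with the onnx Python package; a minimal sketch, using the model path from the trtexec command above:

import onnx

# Print the first graph input; elem_type 2 corresponds to
# TensorProto.UINT8 in the ONNX data type enum.
model = onnx.load("model_TRT.onnx")
print(model.graph.input[0])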
Could you please let me know how I can load an ONNX model with 'jetson.inference.detectNet'? It seems only able to load the UFF format. When I try to load an ONNX model it gives the error below :( Is there any tutorial explaining this?
detectNet -- loading detection network model from:
-- prototxt NULL
-- model /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- mean_pixel 0.000000
-- mean_binary NULL
-- class_labels NULL
-- threshold 0.500000
-- batch_size 1
[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx.1.1.7103.GPU.FP16.engine
[TRT] cache file not found, profiling network model on device GPU
[TRT] device GPU, loading /usr/bin/ /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx
----------------------------------------------------------------
Input filename: /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.8.0
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
[TRT] Plugin creator already registered - ::GridAnchor_TRT version 1
[TRT] Plugin creator already registered - ::NMS_TRT version 1
[TRT] Plugin creator already registered - ::Reorg_TRT version 1
[TRT] Plugin creator already registered - ::Region_TRT version 1
[TRT] Plugin creator already registered - ::Clip_TRT version 1
[TRT] Plugin creator already registered - ::LReLU_TRT version 1
[TRT] Plugin creator already registered - ::PriorBox_TRT version 1
[TRT] Plugin creator already registered - ::Normalize_TRT version 1
[TRT] Plugin creator already registered - ::RPROI_TRT version 1
[TRT] Plugin creator already registered - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Plugin creator already registered - ::CropAndResize version 1
[TRT] Plugin creator already registered - ::DetectionLayer_TRT version 1
[TRT] Plugin creator already registered - ::Proposal version 1
[TRT] Plugin creator already registered - ::ProposalLayer_TRT version 1
[TRT] Plugin creator already registered - ::PyramidROIAlign_TRT version 1
[TRT] Plugin creator already registered - ::ResizeNearest_TRT version 1
[TRT] Plugin creator already registered - ::Split version 1
[TRT] Plugin creator already registered - ::SpecialSlice_TRT version 1
[TRT] Plugin creator already registered - ::InstanceNormalization_TRT version 1
Unsupported ONNX data type: UINT8 (2)
ERROR: input_tensor:0:188 In function importInput:
[8] Assertion failed: convertDtype(onnxDtype.elem_type(), &trtDtype)
[TRT] failed to parse ONNX model '/home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx'
[TRT] device GPU, failed to load /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/model_tf2.onnx
[TRT] detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
File "detectnet-camera.py", line 49, in <module>
net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)
Exception: jetson.inference -- detectNet failed to load network
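For what it's worth, jetson-inference is documented to load custom ONNX detection models when the tensor names are passed explicitly; the sketch below mirrors its pytorch-ssd retraining example. The model path and blob names (input_0, scores, boxes) come from that workflow and do not match this EfficientDet export:

import sys
import jetson.inference

# Blob names follow the jetson-inference pytorch-ssd example and are
# placeholders here; an EfficientDet graph exported by tf2onnx names its
# input/output tensors differently.
sys.argv += ["--model=ssd-mobilenet.onnx",
             "--labels=labels.txt",
             "--input-blob=input_0",
             "--output-cvg=scores",
             "--output-bbox=boxes"]

net = jetson.inference.detectNet("ssd-mobilenet-v2", sys.argv, 0.5)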
Update on my progress: @AastaLLL I followed the instructions for changing the model's input type from UINT8 to FLOAT32.
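The modification was along these lines (a minimal sketch using the onnx Python API; if the graph casts the uint8 input internally, that Cast node may also need adjusting):

import onnx

# Rewrite any UINT8 graph inputs to FLOAT so the TensorRT ONNX parser
# accepts them; filenames match the paths in the logs below.
model = onnx.load("model_TRT.onnx")
for inp in model.graph.input:
    if inp.type.tensor_type.elem_type == onnx.TensorProto.UINT8:
        inp.type.tensor_type.elem_type = onnx.TensorProto.FLOAT
onnx.save(model, "modifed_model_TRT.onnx")

Using the modified ONNX model (FLOAT32 input), I get this error on both 'jetson.inference.detectNet' and "trtexec":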
ERROR: builtin_op_importers.cpp:1554 In function importIf:
[8] Assertion failed: cond.is_weights() && cond.weights().count() == 1 && "If condition must be a initializer!"
[TRT] failed to parse ONNX model '/home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/modifed_model_TRT.onnx'
[TRT] device GPU, failed to load /home/bl/Desktop/Workplace/projects/Meat_stag1/efficientdet_d1_coco17_tpu-32/ONNX/modifed_model_TRT.onnx
[TRT] detectNet -- failed to initialize.
jetson.inference -- detectNet failed to load network
Traceback (most recent call last):
File "detectnet-camera.py", line 49, in <module>
net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)
Exception: jetson.inference -- detectNet failed to load network
Input filename: /media/pk/Data/Project/Project/Venice/Maet_stage1/efficientdet_d1_coco17_tpu-32/6class/ONNX/modifed_model_TRT.onnx
ONNX IR version: 0.0.7
Opset version: 11
Producer name:
Producer version:
Domain:
Model version: 0
Doc string:
WARNING: ONNX model has a newer ir_version (0.0.7) than this parser was built against (0.0.3).
[10/17/2020-14:56:10] [E] [TRT] Parameter check failed at: …/builder/Network.cpp::addInput::671, condition: isValidDims(dims, hasImplicitBatchDimension())
ERROR: ModelImporter.cpp:80 In function importInput:
[8] Assertion failed: *tensor = importer_ctx->network()->addInput( input.name().c_str(), trt_dtype, trt_dims)
[10/17/2020-14:56:10] [E] Failed to parse onnx file
[10/17/2020-14:56:10] [E] Parsing model failed
[10/17/2020-14:56:10] [E] Engine could not be created
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/media/pk/Data/Project/Project/Venice/Maet_stage1/efficientdet_d1_coco17_tpu-32/6class/ONNX/modifed_model_TRT.onnx --verbose
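The isValidDims failure suggests the modified model's input shape is still dynamic (unknown dimensions) under TensorRT's implicit-batch mode. A sketch of pinning it with the onnx Python API follows; the 1x640x640x3 NHWC shape is an assumption based on EfficientDet-D1's default resolution:

import onnx

# Pin each symbolic input dimension to a concrete value; the NHWC shape
# below assumes batch 1 and EfficientDet-D1's default 640x640 input.
model = onnx.load("modifed_model_TRT.onnx")
dims = model.graph.input[0].type.tensor_type.shape.dim
for d, value in zip(dims, [1, 640, 640, 3]):
    d.dim_value = value
onnx.save(model, "modifed_model_TRT_fixed.onnx")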