Mr. Dusty,
DetectNet is not working correctly with my ONNX model. I retrained the SSD-Mobilenet-v2 network to detect fruit (eight fruit object classes, trained for one epoch on a dataset of 6500 images). When the .py script runs, it shows the following:
device GPU, /home/a508/workspace/jetson-inference/python/training/detection/ssd/models/fruit/ssd-mobilenet.onnx initialized.
detectNet -- using ONNX model
detectNet -- maximum bounding boxes: 1
detectNet -- loaded 9 class info entries
detectNet -- number of object classes: 1
But I have 8 classes; it only shows one, and the web camera detects nothing but BACKGROUND.
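For reference, the network is loaded through the Python bindings roughly like this (a minimal sketch; the argv values are the ones shown in the log below, while the camera URI and display sink are assumptions):

#!/usr/bin/env python3
# minimal sketch of the detection script; argv values are copied from the log below,
# the camera URI (/dev/video0) and display sink are assumptions
import jetson.inference
import jetson.utils

MODEL_DIR = "/home/a508/workspace/jetson-inference/python/training/detection/ssd/models/fruit"

net = jetson.inference.detectNet(argv=[
    "--model=" + MODEL_DIR + "/ssd-mobilenet.onnx",
    "--class_labels=" + MODEL_DIR + "/labels.txt",
    "--threshold=0.1",
    "--input_blob=input_0",
    "--output_cvg=scores",
    "--output_bbox=boxes"])

camera = jetson.utils.videoSource("/dev/video0")    # assumed camera URI
display = jetson.utils.videoOutput("display://0")   # assumed output sink

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)
    for d in detections:
        print(net.GetClassDesc(d.ClassID), d.Confidence)   # prints only BACKGROUND
    display.Render(img)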
These are my labels:
BACKGROUND
Apple
Banana
Grape
Orange
Pear
Pineapple
Strawberry
Watermelon
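Counting the label entries gives BACKGROUND plus the 8 fruit classes, which matches the 9 class info entries reported below (a quick check, assuming the labels path from the log):

# count the entries in labels.txt; path taken from the log below
labels_path = ("/home/a508/workspace/jetson-inference/python/training/"
               "detection/ssd/models/fruit/labels.txt")

with open(labels_path) as f:
    labels = [line.strip() for line in f if line.strip()]

print(len(labels))   # expected: 9
print(labels)        # ['BACKGROUND', 'Apple', ..., 'Watermelon']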
Here is the log:
jetson.inference.__init__.py
jetson.inference -- initializing Python 3.6 bindings...
jetson.inference -- registering module types...
jetson.inference -- done registering module types
jetson.inference -- done Python 3.6 binding initialization
jetson.utils.__init__.py
jetson.utils -- initializing Python 3.6 bindings...
jetson.utils -- registering module functions...
jetson.utils -- done registering module functions
jetson.utils -- registering module types...
jetson.utils -- done registering module types
jetson.utils -- done Python 3.6 binding initialization
jetson.inference -- PyTensorNet_New()
jetson.inference -- PyDetectNet_Init()
jetson.inference -- detectNet loading network using argv command line params
jetson.inference -- detectNet.__init__() argv[0] = '--model=/home/a508/workspace/jetson-inference/python/training/detection/ssd/models/fruit/ssd-mobilenet.onnx'
jetson.inference -- detectNet.__init__() argv[1] = '--class_labels=/home/a508/workspace/jetson-inference/python/training/detection/ssd/models/fruit/labels.txt'
jetson.inference -- detectNet.__init__() argv[2] = '--threshold=0.1'
jetson.inference -- detectNet.__init__() argv[3] = '--input_blob=input_0'
jetson.inference -- detectNet.__init__() argv[4] = '--output_cvg=scores'
jetson.inference -- detectNet.__init__() argv[5] = '--output_bbox=boxes'
detectNet -- loading detection network model from:
-- prototxt NULL
-- model /home/a508/workspace/jetson-inference/python/training/detection/ssd/models/fruit/ssd-mobilenet.onnx
-- input_blob 'input_0'
-- output_cvg 'scores'
-- output_bbox 'boxes'
-- mean_pixel 0.000000
-- mean_binary NULL
-- class_labels /home/a508/workspace/jetson-inference/python/training/detection/ssd/models/fruit/labels.txt
-- threshold 0.100000
-- batch_size 1
[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] completed loading NVIDIA plugins.
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file /home/a508/workspace/jetson-inference/python/training/detection/ssd/models/fruit/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT] loading network profile from engine cache... /home/a508/workspace/jetson-inference/python/training/detection/ssd/models/fruit/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT] device GPU, /home/a508/workspace/jetson-inference/python/training/detection/ssd/models/fruit/ssd-mobilenet.onnx loaded
[TRT] Deserialize required 5069869 microseconds.
[TRT] device GPU, CUDA engine context initialized with 3 bindings
[TRT] binding -- index 0
-- name 'input_0'
-- type FP32
-- in/out INPUT
-- # dims 4
-- dim #0 1 (SPATIAL)
-- dim #1 3 (SPATIAL)
-- dim #2 300 (SPATIAL)
-- dim #3 300 (SPATIAL)
[TRT] binding -- index 1
-- name 'scores'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1 (SPATIAL)
-- dim #1 3000 (SPATIAL)
-- dim #2 9 (SPATIAL)
[TRT] binding -- index 2
-- name 'boxes'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1 (SPATIAL)
-- dim #1 3000 (SPATIAL)
-- dim #2 4 (SPATIAL)
[TRT] binding to input 0 input_0 binding index: 0
[TRT] binding to input 0 input_0 dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 scores binding index: 1
[TRT] binding to output 0 scores dims (b=1 c=3000 h=9 w=1) size=108000
[TRT] binding to output 1 boxes binding index: 2
[TRT] binding to output 1 boxes dims (b=1 c=3000 h=4 w=1) size=48000
device GPU, /home/a508/workspace/jetson-inference/python/training/detection/ssd/models/fruit/ssd-mobilenet.onnx initialized.
detectNet -- using ONNX model
detectNet -- maximum bounding boxes: 1
detectNet -- loaded 9 class info entries
detectNet -- number of object classes: 1