Own dataset (MobileNet SSD v2) not working - TRT Object Detection

Description

Hi there,
I followed the tutorial (GitHub - AastaNV/TRT_object_detection: Python sample for referencing object detection model with TensorRT) on a Jetson Nano and it worked well with the ssd_mobilenet_v2_coco_2018_03_29 model.
However, it does not work with my own dataset (SSD MobileNet v2), which I trained on a single class (Face).

I am a beginner with the Jetson Nano. Please provide a solution or a tutorial I can follow.
Thank you.

TRT_object_detection/config/model_ssd_mobilenet_v2_coco_2018_03_29.py
I modified the path and numClasses.

#path = 'model/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb'
path = '/home/tn/ssd_mobilenet_v2_face/frozen_inference_graph.pb'

NMS = gs.create_plugin_node(
    name="NMS",
    op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=2,
    inputOrder=[1, 0, 2],
    confSigmoid=1,
    isNormalized=1
)
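
(My understanding, an assumption on my part rather than something stated in the repo: numClasses for the NMS_TRT plugin counts the background label as well, so a single trained class gives 2, mirroring the TensorFlow Object Detection API's num_classes + 1 convention.)

    # Assumption: the NMS_TRT plugin counts the background label as a class.
    NUM_TRAINED_CLASSES = 1                   # only "Face" was trained (hypothetical helper name)
    numClasses = NUM_TRAINED_CLASSES + 1      # -> 2, the value set above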

TRT_object_detection/main.py
I got an error here.

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 28
    builder.max_batch_size = 1
    builder.fp16_mode = True

    parser.register_input('Input', model.dims)
    parser.register_output('MarkOutput_0')
    parser.parse('tmp.uff', network)
    engine = builder.build_cuda_engine(network)                   ==> here

    buf = engine.serialize()
    with open(model.TRTbin, 'wb') as f:
        f.write(buf)
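
For reference, a debugging sketch of the same build step (an assumption on my part, using the TensorRT 7.x Python API): raising the logger severity to VERBOSE and checking the parser/builder return values gives more context in the log before the nmsPlugin assertion aborts the process.

    import tensorrt as trt

    # Debugging sketch only: VERBOSE logging prints each layer and the tensors
    # feeding it, which helps spot a loc/conf/priorbox mismatch at the NMS plugin.
    # NOTE: the FlattenConcat/NMS plugins still have to be registered first,
    # exactly as main.py already does.
    TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
        builder.max_workspace_size = 1 << 28
        builder.max_batch_size = 1
        builder.fp16_mode = True

        parser.register_input('Input', (3, 300, 300))   # dims taken from the log below
        parser.register_output('MarkOutput_0')
        if not parser.parse('tmp.uff', network):
            raise RuntimeError('UFF parsing failed - check the plugin node names in the config')

        engine = builder.build_cuda_engine(network)
        if engine is None:
            raise RuntimeError('Engine build failed - see the verbose log above')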

Error

tn@tn-desktop:~/TRT_object_detection$ python3 main.py maksssksksss0.png
2021-03-10 14:52:32.047116: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
NOTE: UFF has been tested with TensorFlow 1.15.0.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
UFF Version 0.6.9
=== Automatically deduced input nodes ===
[name: "Input"
op: "Placeholder"
attr {
  key: "dtype"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "shape"
  value {
    shape {
      dim {
        size: 1
      }
      dim {
        size: 3
      }
      dim {
        size: 300
      }
      dim {
        size: 300
      }
    }
  }
}
]

=========================================

Using output node NMS
Converting to UFF graph
Warning: No conversion function registered for layer: NMS_TRT yet.
Converting NMS as custom op: NMS_TRT
WARNING:tensorflow:From /usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:226: The name tf.AttrValue is deprecated. Please use tf.compat.v1.AttrValue instead.

Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_conf as custom op: FlattenConcat_TRT
Warning: No conversion function registered for layer: GridAnchor_TRT yet.
Converting GridAnchor as custom op: GridAnchor_TRT
Warning: No conversion function registered for layer: FlattenConcat_TRT yet.
Converting concat_box_loc as custom op: FlattenConcat_TRT
DEBUG [/usr/lib/python3.6/dist-packages/uff/converters/tensorflow/converter.py:143] Marking ['NMS'] as outputs
No. nodes: 644
UFF Output written to tmp.uff
UFF Text Output written to tmp.pbtxt
[TensorRT] INFO: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[TensorRT] INFO: Detected 1 inputs and 2 output network tensors.
#assertionnmsPlugin.cpp,246
Aborted (core dumped)

Environment

Jetpack Version: 4.5
TensorRT Version: 7.1.3
CUDA Version: 10.2
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.15

Relevant Files

model

output_inference_graph_v1.pb.zip - Google Drive

TRT_object_detection
I modified the TRT_object_detection files for JetPack 4.5.

TRT_object_detection.zip - Google Drive

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Hi,
Please refer to the link below for a custom plugin implementation and sample:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleOnnxMnistCoordConvAC

Thanks!

Hi @gustcool2,

Please refer to the following link:

Thank you.

Thank you for the reply.
Sorry, I may have confused you: this is not about a custom model or plugin, it is about my own dataset.

I solved this issue.
I referred to '/usr/src/tensorrt/samples/sampleUffSSD'
and modified 'TRT_object_detection/config/model_ssd_mobilenet_v2_coco_2018_03_29.py':

NMS = gs.create_plugin_node(
    name="NMS",
    op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=2,
    inputOrder=[0, 2, 1],                          ==> [1, 0, 2] -> [0, 2, 1]
    confSigmoid=1,
    isNormalized=1
)

I am still curious what this parameter means.
Thank you for the reply.
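
My guess, which I have not confirmed: inputOrder tells the NMS_TRT plugin which of its inputs carry the box locations (loc), the class confidences (conf), and the prior boxes, in that order, and the right indices depend on how the concat nodes happen to be wired into the NMS node for a particular exported graph. A rough sketch of how the order could be inspected after the config's graph preprocessing (add_plugin below is a placeholder name for whatever the config module actually calls that step):

    import graphsurgeon as gs
    from config import model_ssd_mobilenet_v2_coco_2018_03_29 as model   # adjust to match main.py's import

    # Rebuild the modified graph the same way main.py does before UFF conversion.
    # NOTE: 'add_plugin' is a placeholder for the config's preprocessing step.
    dynamic_graph = model.add_plugin(gs.DynamicGraph(model.path))

    # After the namespaces are collapsed, the NMS plugin node lists its inputs in
    # graph order; inputOrder is [index of loc, index of conf, index of priorbox].
    nms = dynamic_graph.find_nodes_by_name('NMS')[0]
    print(nms.input)
    # e.g. ['concat_box_loc', 'concat_priorbox', 'concat_box_conf', ...]
    #      loc at 0, conf at 2, priorbox at 1  ->  inputOrder=[0, 2, 1]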
