How to run detection with my own model in TRT_object_detection

Description

https://github.com/AastaNV/TRT_object_detection#update-graphsurgeon-converter

I succeeded in running the TRT_object_detection script as-is, using the ssd_mobilenet_v2_coco model. Now I want to run the script with a model I re-trained myself, but I don't know how. After re-training, I exported the model to a "frozen_inference_graph.pb" file.

Environment

TensorRT Version: 5.0.6.3
GPU Type: Jetson TX2
Nvidia Driver Version:
CUDA Version: 10.0
CUDNN Version: 7.3.1
Operating System + Version: JetPack 4.2 + Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.14.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Hi,
The config file needs to be slightly modified for a different model.
Please refer to the link below:

Thanks
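The change is usually just the model path in the repo's config module (e.g. config/model_ssd_mobilenet_v2_coco_2018_03_29.py in TRT_object_detection); if the re-trained model uses a different label set, the NMS plugin's class count in that config must match it as well. A minimal, illustrative sketch — the directory name and class count below are assumptions for a hypothetical re-trained model:

```python
# Illustrative config sketch for TRT_object_detection.
# The `path` variable mirrors the repo's config modules; the directory
# name and class count are assumptions, not values from the thread.

# Point `path` at the re-trained frozen graph:
path = 'model/my_ssd_mobilenet_v2/frozen_inference_graph.pb'

# If the label set changed during re-training, the class count passed to
# the NMS plugin (numClasses in the config's add_plugin()) must match:
# number of object classes + 1 for the background class.
num_object_classes = 3          # assumed 3-class custom dataset
numClasses = num_object_classes + 1
```

With a matching directory layout, main.py should pick up the re-trained graph without further code changes.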

I know that.
I mean I want to use a model that I've re-trained on my own data.

Hi,

Can you try specifying your re-trained model's path in the config file?

Thanks

I use the re-trained ssd_mobilenet_v2 model.
path = 'model/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb'
I think you are saying that I should revise this part. But I named my directory to match that path, so I don't need to modify it.

Yes, in that case the path doesn't need to be updated.
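One quick sanity check before running the conversion is to confirm the re-trained frozen graph is actually where the config's path points. A minimal sketch (the helper name here is made up):

```python
import os

def frozen_graph_present(path):
    """Return True if a frozen inference graph file exists at `path`."""
    return os.path.isfile(path)

# Path quoted earlier in the thread:
if not frozen_graph_present('model/ssd_mobilenet_v2_coco_2018_03_29/'
                            'frozen_inference_graph.pb'):
    print('frozen graph not found -- copy the re-trained '
          'frozen_inference_graph.pb into that directory')
```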