How to run inference with my own model

Hi

I trained a yolo3 model with TF, converted it to UFF format, and placed it in the networks folder. After that, I followed the GitHub object detection sample (https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-example-2.md) and tried to run inference on a Jetson Nano. But it does not work, failing with the error "detectNet invalid built-in network was requested".
How do I load my trained model? Below is my code.

==========================================================
import jetson.inference
import jetson.utils

input_file = "./001.jpg"
output_file = "./001.out.jpg"
overlay = "boxes,scores,labels,num_detections"

# load an image (into shared CPU/GPU memory)
img, width, height = jetson.utils.loadImageRGBA(input_file)

# load the object detection network
network = jetson.inference.detectNet("yolo3_tensorflow_model", threshold=0.3)

# detect objects in the image (with overlay)
detections = network.Detect(img, width, height, overlay)

# print the detections
print("detected {:d} objects in image".format(len(detections)))
for detection in detections:
    print(detection)

# print out timing info
network.PrintProfilerTimes()

# save the output image with the bounding box overlays
jetson.utils.saveImageRGBA(output_file, img, width, height)

Hi guo.feng,

YOLO will probably require some changes to the detectNet post-processing source (c/detectNet.cpp) so that it can correctly interpret the outputs of the YOLO network. For UFF, detectNet is configured for the SSD-Mobilenet/SSD-Inception output formats.
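To illustrate why the post-processing differs: a YOLO head emits raw per-grid-cell values (tx, ty, tw, th, objectness) that must be decoded with sigmoid offsets and anchor-scaled exponentials before they become absolute boxes, whereas the SSD path in detectNet expects already-decoded box tensors. Below is a minimal sketch of that decode step for a single grid cell, in plain Python. The anchor size, grid size, and image size used in the example are made-up placeholders, not values from any particular YOLOv3 config:

```python
import math

def decode_yolo_cell(tx, ty, tw, th, t_obj, cx, cy,
                     anchor_w, anchor_h, grid_size, img_size):
    """Decode one raw YOLO grid-cell prediction into an absolute box.

    (cx, cy) is the integer cell index, (anchor_w, anchor_h) the prior
    box size in pixels, grid_size the number of cells per side.
    """
    def sigmoid(v):
        return 1.0 / (1.0 + math.exp(-v))

    stride = img_size / grid_size          # pixels covered by one cell
    x = (cx + sigmoid(tx)) * stride        # box centre x, in pixels
    y = (cy + sigmoid(ty)) * stride        # box centre y, in pixels
    w = anchor_w * math.exp(tw)            # box width, anchor-scaled
    h = anchor_h * math.exp(th)            # box height, anchor-scaled
    objectness = sigmoid(t_obj)            # confidence score in [0, 1]
    return x, y, w, h, objectness

# example: zero offsets place the centre in the middle of cell (3, 5)
box = decode_yolo_cell(0.0, 0.0, 0.0, 0.0, 0.0,
                       cx=3, cy=5, anchor_w=116, anchor_h=90,
                       grid_size=13, img_size=416)
print(box)  # (112.0, 176.0, 116.0, 90.0, 0.5)
```

This is the kind of logic that would have to be added to the detectNet post-processing path for it to interpret YOLO output tensors instead of SSD ones.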