How to run inference with my own model


I trained a YOLOv3 model with TensorFlow, converted it to UFF format, and placed it in the networks folder. After that, I followed the GitHub object detection sample ( ) and tried to run inference on a Jetson Nano, but it fails with the error "detectNet invalid built-in network was requested".
How do I load my own trained model? My code is below.

import jetson.inference
import jetson.utils
import argparse
import sys

input_file = "./001.jpg"
output_file = "./001.out.jpg"
# detectNet's valid overlay flags are "box", "labels", "conf" (or "none"),
# not TensorFlow output tensor names
overlay = "box,labels,conf"

# load an image (into shared CPU/GPU memory)

img, width, height = jetson.utils.loadImageRGBA(input_file)

# load the object detection network

network = jetson.inference.detectNet("yolo3_tensorflow_model", threshold=0.3)

# detect objects in the image (with overlay)

detections = network.Detect(img, width, height, overlay)

# print the detections

print("detected {:d} objects in image".format(len(detections)))
for detection in detections:
    print(detection)

# print out timing info
network.PrintProfilerTimes()

# save the output image with the bounding box overlays

jetson.utils.saveImageRGBA(output_file, img, width, height)
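For reference, the "invalid built-in network" error suggests that detectNet's first positional argument only accepts built-in network names, not arbitrary model files. In jetson-inference, custom models are instead passed through command-line-style flags. Below is a minimal sketch of assembling such an argument list; the flag names (`--model`, `--labels`, `--input-blob`, `--output-cvg`, `--output-bbox`) follow jetson-inference's custom-model conventions, but note that path names here are hypothetical placeholders, and whether a UFF YOLOv3 is accepted by this loader is an assumption to verify, not a confirmed fact.

```python
# Hedged sketch: build the extended-argv list that jetson-inference's
# detectNet constructor can take for a custom (non-built-in) model.
# All paths and blob/tensor names below are hypothetical placeholders.

def build_detectnet_argv(model_path, labels_path,
                         input_blob="input_0",
                         output_cvg="scores",
                         output_bbox="boxes"):
    """Assemble command-line-style arguments for loading a custom model."""
    return [
        "--model=" + model_path,
        "--labels=" + labels_path,
        "--input-blob=" + input_blob,
        "--output-cvg=" + output_cvg,
        "--output-bbox=" + output_bbox,
    ]

argv = build_detectnet_argv("networks/yolo3/model.uff",
                            "networks/yolo3/labels.txt")

# On the Nano itself (requires the jetson-inference runtime):
# import jetson.inference
# network = jetson.inference.detectNet(argv=argv, threshold=0.3)
```

Whether this path works for a TensorFlow-converted UFF YOLOv3 on the Nano is exactly the question here.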

Moving this to the Jetson Nano forum so the Jetson team can take a look.

Repost of:

Let's keep the discussion of this topic in that post, thanks.