Hi
I trained a YOLOv3 model with TensorFlow and converted it to UFF format, then placed it in the networks folder. After that I followed the GitHub object detection sample (jetson-inference/detectnet-example-2.md at master · dusty-nv/jetson-inference · GitHub) and tried to run inference on a Jetson Nano, but it does not work and fails with the error "detectNet invalid built-in network was requested".
How do I load my own trained model? My code is below, followed by a sketch of what I think the custom-model loading might look like.
==========================================================
import jetson.inference
import jetson.utils
import argparse
import sys
input_file = "./001.jpg"
output_file = "./001.out.jpg"
overlay = "boxes,scores,labels,num_detections"
# load an image (into shared CPU/GPU memory)
img, width, height = jetson.utils.loadImageRGBA(input_file)
# load the object detection network
network = jetson.inference.detectNet("yolo3_tensorflow_model", threshold=0.3)
# detect objects in the image (with overlay)
detections = network.Detect(img, width, height, overlay)
# print the detections
print("detected {:d} objects in image".format(len(detections)))
for detection in detections:
print(detection)
# print out timing info
network.PrintProfilerTimes()
# save the output image with the bounding box overlays
jetson.utils.saveImageRGBA(output_file, img, width, height)
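==========================================================
From the repo's docs on custom/re-trained detection models, it looks like detectNet can also be pointed at a model file through extra command-line style arguments instead of a built-in network name. Below is only a sketch of what I think that might look like: the flag names (--model, --labels, --input-blob, --output-cvg, --output-bbox) come from the custom ONNX SSD example in the repo, and the paths and layer names are placeholders, so I am not sure any of this applies to a TensorFlow/UFF YOLOv3 model.

import jetson.inference

# sketch only: the paths and layer names below are placeholders, not my real model
network = jetson.inference.detectNet(argv=[
    "--model=networks/yolo3_tensorflow_model/yolo3.uff",    # placeholder path to the converted model
    "--labels=networks/yolo3_tensorflow_model/labels.txt",  # placeholder path to the class label file
    "--input-blob=input_0",                                 # assumed input layer name
    "--output-cvg=scores",                                  # assumed output layer names
    "--output-bbox=boxes",
    "--threshold=0.3",
])

Is this the right way to load my own model, or does a UFF YOLOv3 need something else?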