I’ve also checked the Jetbot, where I believe some actions are triggered by the inference result. For sending a metric message to AWS CloudWatch, could I do something similar inside the update() method here?
def update(change):
    global blocked_slider, robot
    x = change['new']
    x = preprocess(x)
    y = model(x)

    # we apply the `softmax` function to normalize the output vector so it sums to 1 (which makes it a probability distribution)
    y = F.softmax(y, dim=1)

    prob_blocked = float(y.flatten()[0])
    blocked_slider.value = prob_blocked

    if prob_blocked < 0.5:
        robot.forward(speed_slider.value)
    else:
        robot.left(speed_slider.value)

    time.sleep(0.001)

update({'new': camera.value})  # we call the function once to initialize
Isn’t CloudWatch for monitoring your AWS resources? Maybe you want to run a Lambda function instead? I would start by writing the logic for that, whatever it is, in Python.
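For example, a minimal sketch of publishing the blocked probability as a custom CloudWatch metric with boto3 could look like the following; it assumes boto3 is installed and AWS credentials are configured on the Jetbot, and the namespace, metric name, and region here are just placeholders:

import boto3

# sketch only: the 'Jetbot' namespace, 'BlockedProbability' metric, and region are placeholders
cloudwatch = boto3.client('cloudwatch', region_name='us-west-2')

def publish_blocked_metric(prob_blocked):
    # push the current blocked probability as a custom CloudWatch metric
    cloudwatch.put_metric_data(
        Namespace='Jetbot',
        MetricData=[{
            'MetricName': 'BlockedProbability',
            'Value': prob_blocked,
            'Unit': 'None'
        }]
    )

You could then call publish_blocked_metric(prob_blocked) from inside update(), although you would probably want to rate-limit it, since update() runs on every camera frame.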
from jetson_inference import detectNet
parser.add_argument("--network", type=str, default="ssd-mobilenet-v2", help="pre-trained model to load (see below for options)")
# load the object detection network
net = detectNet(args.network, sys.argv, args.threshold)
So to plug my re-trained model into detectNet as a parameter: after re-training and converting it to ONNX format with python3 onnx_export.py --model-dir=models/fruit, I can pass my re-trained ONNX model as a parameter:
# note: to hard-code the paths to load a model, the following API can be used:
net = detectNet(model="model/ssd-mobilenet.onnx", labels="model/labels.txt",
                input_blob="input_0", output_cvg="scores", output_bbox="boxes",
                threshold=args.threshold)
Hi @renxin.ubc, yes, that is correct: detectnet.py can run either pre-trained or custom models (that you trained with train_ssd.py). It can load your custom model either through the hard-coded function call you have above, or with command-line syntax as shown here.
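For reference, the command-line form would look something along these lines; the paths are just examples based on your fruit model, so adjust them to where your exported ONNX model and labels file actually live:

./detectnet.py --model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt \
               --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
               /dev/video0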
Then you can add your own actions/triggers inside the main loop like this:
while True:
    # capture the next image
    img = input.Capture()

    # detect objects in the image (with overlay)
    detections = net.Detect(img, overlay=args.overlay)

    for detection in detections:
        if net.GetClassDesc(detection.ClassID) == 'person':
            # perform a custom action
            print('detected a person!')
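If you do go the AWS route from your first post, the custom action inside that loop could be something like invoking a Lambda function with boto3. The sketch below assumes AWS credentials are configured on the device and that a function named 'jetbot-detection-handler' (a placeholder name) exists:

import json
import boto3

# sketch only: the function name and region are placeholders
lambda_client = boto3.client('lambda', region_name='us-west-2')

def notify_detection(class_name, confidence):
    # fire-and-forget invocation so the capture loop is not blocked
    lambda_client.invoke(
        FunctionName='jetbot-detection-handler',
        InvocationType='Event',
        Payload=json.dumps({'class': class_name, 'confidence': confidence})
    )

Inside the for loop you would call notify_detection(net.GetClassDesc(detection.ClassID), detection.Confidence), ideally with some throttling so it doesn't fire on every single frame.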
I modified detectnet.py in the build folder at ~/jetson-inference/build/aarch64/bin:
- changed the input/output from reading command-line parameters to the hard-coded device paths of my Jetson Nano setup
- added a print line, print("Xin is about to do something on the inference dectection results"), after the inference is done
# create video sources and outputs
#input = videoSource(args.input_URI, argv=sys.argv)
#output = videoOutput(args.output_URI, argv=sys.argv+is_headless)
input = videoSource("/dev/video0")
output = videoOutput("display://0")

......

# detect objects in the image (with overlay)
detections = net.Detect(img, overlay=args.overlay)

for detection in detections:
    print(detection)

print("Xin is about to do something on the inference dectection results")

# render the image
output.Render(img)
And then when I ran it, I got the print line I added:
xin@xin-desktop:~/jetson-inference/build/aarch64/bin$ ./detectnet.py
[TRT] ------------------------------------------------
[TRT] Timing Report networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] ------------------------------------------------
[TRT] Pre-Process CPU 0.07172ms CUDA 0.83979ms
[TRT] Network CPU 53.65486ms CUDA 42.77724ms
[TRT] Post-Process CPU 0.04636ms CUDA 0.04724ms
[TRT] Visualize CPU 0.24578ms CUDA 10.53187ms
[TRT] Total CPU 54.01872ms CUDA 54.19614ms
[TRT] ------------------------------------------------
detected 1 objects in image
<detectNet.Detection object>
-- ClassID: 1
-- Confidence: 0.921387
-- Left: 0
-- Top: 8.96484
-- Right: 1033.75
-- Bottom: 715.078
-- Width: 1033.75
-- Height: 706.113
-- Area: 729945
-- Center: (516.875, 362.021)
Xin is about to do something on the inference dectection results
OK, great! Just a word of warning - if you were to run cmake or make again at some point, it would overwrite your changes with the original copy from jetson-inference/python/examples. So you may want to re-name your edited version to something else or store it somewhere else.
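For example, something like this keeps a copy outside the build tree (the destination path is just illustrative, and running it from another directory assumes the jetson_inference Python bindings were installed system-wide with make install):

# keep the edited script outside the build tree so cmake/make won't overwrite it
cp ~/jetson-inference/build/aarch64/bin/detectnet.py ~/my-detectnet.py
python3 ~/my-detectnet.py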