Live camera with the mobilenet-ssd model: if a person is detected, send a counter metric message to AWS CloudWatch?

Hi all,

I’m following this tutorial to retrain mobilenet-ssd: jetson-inference/ at master · dusty-nv/jetson-inference · GitHub.

And for the step "Running the Live Camera Program":

detectnet --model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \

Is it possible to add some side actions? For example, if a person is detected in the live camera feed, send a counter metric message to AWS CloudWatch?

I’m a newbie on this topic, so please excuse the naive question. (I’m not even sure if this is a valid question…)
Thank you very much!

I searched around but couldn’t find a question similar to mine.
(And a new user can only include one link??? :cry:)


I’ve also checked the JetBot, where I believe some actions are triggered by inference results. For sending a metric message to AWS CloudWatch, could I do something similar to the update() method here?

def update(change):
    global blocked_slider, robot
    x = change['new']
    x = preprocess(x)
    y = model(x)
    # we apply the `softmax` function to normalize the output vector so it sums to 1 (which makes it a probability distribution)
    y = F.softmax(y, dim=1)
    prob_blocked = float(y.flatten()[0])
    blocked_slider.value = prob_blocked
    if prob_blocked < 0.5:
        robot.forward(0.4)
    else:
        robot.left(0.4)

update({'new': camera.value})  # we call the function once to initialize

as in jetbot/live_demo.ipynb at master · NVIDIA-AI-IOT/jetbot · GitHub

Isn’t CloudWatch for monitoring your AWS resources? Maybe you want to run a Lambda function? I would start by writing the logic for that, whatever it is, in Python.

Then add your function to the main loop of detectNet sample.
jetson-inference/ at master · dusty-nv/jetson-inference
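As a rough sketch of what that Python logic might look like (the namespace "JetsonDemo" and metric name "PersonDetected" are made-up placeholders, and the actual boto3 publish call is only shown in a comment since it needs AWS credentials), you could build the CloudWatch metric payload like this:

```python
import datetime

def person_count_metric(count, namespace="JetsonDemo"):
    """Build the kwargs for CloudWatch's PutMetricData API call.

    The namespace and metric name here are placeholders, not anything
    from the jetson-inference project. The dict layout matches what
    boto3's cloudwatch client expects for put_metric_data().
    """
    return {
        "Namespace": namespace,
        "MetricData": [{
            "MetricName": "PersonDetected",
            "Timestamp": datetime.datetime.now(datetime.timezone.utc),
            "Value": float(count),
            "Unit": "Count",
        }],
    }

# On the Jetson you would then publish it with boto3 (AWS credentials required):
#   import boto3
#   boto3.client("cloudwatch").put_metric_data(**person_count_metric(1))
```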

Maybe DeepStream is a better tool for the job? How to integrate NVIDIA DeepStream on Jetson Modules with AWS IoT Core and AWS IoT Greengrass | The Internet of Things on AWS – Official Blog


Thank you for your prompt reply.

And it’s exactly what I’m looking for: adding my function (either an AWS CloudWatch call or a Lambda call) to the main loop of the detectNet sample.

But it seems to only support pre-trained models?

from jetson_inference import detectNet

parser.add_argument("--network", type=str, default="ssd-mobilenet-v2", help="pre-trained model to load (see below for options)")

# load the object detection network
net = detectNet(args.network, sys.argv, args.threshold)

So to plug my re-trained model into detectNet, after re-training and converting it to ONNX format (python3 --model-dir=models/fruit), I can give my re-trained ONNX model as a parameter:

# note: to hard-code the paths to load a model, the following API can be used:
net = detectNet(model="model/ssd-mobilenet.onnx", labels="model/labels.txt", 
                input_blob="input_0", output_cvg="scores", output_bbox="boxes", 
                threshold=args.threshold)

Is this the right understanding?

Thank you!

Hi @renxin.ubc, yes that is correct; it can run either pre-trained or custom models that you trained. You can load your custom model either using the hard-coded function you have above, or with command-line syntax like shown here.

Then you can add your own actions/triggers inside the main loop like this:

while True:
	# capture the next image
	img = input.Capture()

	# detect objects in the image (with overlay)
	detections = net.Detect(img, overlay=args.overlay)

	for detection in detections:
		if net.GetClassDesc(detection.ClassID) == 'person':
			# perform a custom action
			print('detected a person!')
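If the camera runs at ~30 FPS, publishing a metric on every frame that contains a person would mean a lot of CloudWatch calls. One option (purely a sketch; the actual publish call is left as a pluggable callback, and the class name is made up here) is to accumulate counts in the loop and flush them on an interval:

```python
import time

class PersonMetricDebouncer:
    """Accumulate per-frame person detections and flush the count
    periodically, instead of publishing a metric on every frame.

    `publish` is whatever sends the metric (e.g. a boto3
    put_metric_data call); `clock` is injectable for testing.
    """

    def __init__(self, publish, interval=60.0, clock=time.monotonic):
        self.publish = publish      # callback taking the accumulated count
        self.interval = interval    # seconds between flushes
        self.clock = clock
        self.count = 0
        self.last_flush = clock()

    def record(self, n=1):
        """Call once per frame with the number of persons detected."""
        self.count += n
        if self.clock() - self.last_flush >= self.interval:
            self.publish(self.count)
            self.count = 0
            self.last_flush = self.clock()
```

In the loop above you would then call something like `debouncer.record(1)` in place of the print statement.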

Thank you Dustin!

I tried it myself, and it worked.

I modified the sample in the build folder (path ~/jetson-inference/build/aarch64/bin):

  1. changed the input/output from reading input parameters to the hard-coded device paths of my Jetson Nano setup
  2. added a print line print("Xin is about to do something on the inference dectection results") after inference is done
# create video sources and outputs
#input = videoSource(args.input_URI, argv=sys.argv)
#output = videoOutput(args.output_URI, argv=sys.argv+is_headless)

input = videoSource("/dev/video0")
output = videoOutput("display://0")


	# detect objects in the image (with overlay)
	detections = net.Detect(img, overlay=args.overlay)

	for detection in detections:
		print(detection)

	print("Xin is about to do something on the inference dectection results")

	# render the image

And then when I ran it, I got the print line I added:

xin@xin-desktop:~/jetson-inference/build/aarch64/bin$ ./
[TRT]    ------------------------------------------------
[TRT]    Timing Report networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT]    ------------------------------------------------
[TRT]    Pre-Process   CPU   0.07172ms  CUDA   0.83979ms
[TRT]    Network       CPU  53.65486ms  CUDA  42.77724ms
[TRT]    Post-Process  CPU   0.04636ms  CUDA   0.04724ms
[TRT]    Visualize     CPU   0.24578ms  CUDA  10.53187ms
[TRT]    Total         CPU  54.01872ms  CUDA  54.19614ms
[TRT]    ------------------------------------------------

detected 1 objects in image
<detectNet.Detection object>
   -- ClassID: 1
   -- Confidence: 0.921387
   -- Left:    0
   -- Top:     8.96484
   -- Right:   1033.75
   -- Bottom:  715.078
   -- Width:   1033.75
   -- Height:  706.113
   -- Area:    729945
   -- Center:  (516.875, 362.021)
Xin is about to do something on the inference dectection results

OK, great! Just a word of warning: if you were to run cmake or make again at some point, it would overwrite your changes with the original copy from jetson-inference/python/examples. So you may want to rename your edited version to something else or store it somewhere else.
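For example, a minimal sketch of storing the edited sample somewhere a rebuild can't touch (the filename detectnet.py and the backup directory are assumptions based on the snippets in this thread, not verified paths):

```shell
# Keep the edited sample outside the build tree so cmake/make can't clobber it.
# SAFE_DIR and the detectnet.py filename are placeholders for this setup.
SAFE_DIR="${HOME}/my-jetson-scripts"
EDITED="${HOME}/jetson-inference/build/aarch64/bin/detectnet.py"

mkdir -p "$SAFE_DIR"
if [ -f "$EDITED" ]; then
    cp "$EDITED" "$SAFE_DIR/detectnet-cloudwatch.py"
fi
```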


Sure, will do that. Thank you Dustin, and all the best to you :)