Hello!
I’m a beginner when it comes to the Jetson Nano. :-)
Is there a working Python example for running a model exported from Azure Custom Vision (TensorFlow or ONNX) on my Jetson Nano, doing object detection with a webcam?
I'm running the latest JetPack and have installed the latest TensorFlow.
Best regards!
/Jonas
Hi,
Please check whether the tutorial below meets your requirements:
# Locating Objects with DetectNet
The previous recognition examples output class probabilities representing the entire input image. Next we're going to focus on **object detection**, and finding where in the frame various objects are located by extracting their bounding boxes. Unlike image classification, object detection networks are capable of detecting many different objects per frame.
<img src="https://github.com/dusty-nv/jetson-inference/raw/dev/docs/images/detectnet.jpg" >
The [`detectNet`](../c/detectNet.h) object accepts an image as input, and outputs a list of coordinates of the detected bounding boxes along with their classes and confidence values. [`detectNet`](../c/detectNet.h) is available to use from [Python](https://rawgit.com/dusty-nv/jetson-inference/python/docs/html/python/jetson.inference.html#detectNet) and [C++](../c/detectNet.h). See below for various [pre-trained detection models](#pre-trained-detection-models-available) available for download. The default model used is a [91-class](../data/networks/ssd_coco_labels.txt) SSD-Mobilenet-v2 model trained on the MS COCO dataset, which achieves realtime inferencing performance on Jetson with TensorRT.
As examples of using the `detectNet` class, we provide sample programs for C++ and Python:
- [`detectnet.cpp`](../examples/detectnet/detectnet.cpp) (C++)
- [`detectnet.py`](../python/examples/detectnet.py) (Python)
These samples are able to detect objects in images, videos, and camera feeds. For more info about the various types of input/output streams supported, see the [Camera Streaming and Multimedia](aux-streaming.md) page.
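As a minimal sketch of the Python sample above: the snippet below uses the real `jetson.inference`/`jetson.utils` API from the jetson-inference project, assuming the library is installed and a V4L2 webcam is available at `/dev/video0` (adjust for your device). The import guard lets it degrade gracefully on machines without the library.

```python
# Minimal live object-detection loop with jetson-inference.
# Assumptions: library installed per the project's build instructions,
# webcam at /dev/video0, display attached. Guarded for non-Jetson hosts.
try:
    import jetson.inference
    import jetson.utils
    HAVE_JETSON = True
except ImportError:
    HAVE_JETSON = False  # not on a Jetson, or jetson-inference not installed

if HAVE_JETSON:
    # Default SSD-Mobilenet-v2 (91 COCO classes); threshold filters weak detections
    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
    camera = jetson.utils.videoSource("/dev/video0")   # webcam input
    display = jetson.utils.videoOutput("display://0")  # on-screen window

    while display.IsStreaming():
        img = camera.Capture()
        detections = net.Detect(img)  # overlays bounding boxes on img by default
        for d in detections:
            print(net.GetClassDesc(d.ClassID), d.Confidence, d.Left, d.Top)
        display.Render(img)
        display.SetStatus("detectNet | {:.0f} FPS".format(net.GetNetworkFPS()))
```

Each `Detection` carries the bounding-box coordinates, class ID, and confidence described above, so you can filter or log results before rendering.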
### Detecting Objects from Images
*(The quoted tutorial is truncated here; see the original page for the full walkthrough.)*
Thanks.
Hi again!
I used the sample above with my ONNX model exported from Azure Custom Vision, but I get this error:
`Network has dynamic or shape inputs but no optimization profile has been defined`
/Jonas
Hi,
Please run trtexec with the input dimension information.
For example:
```shell
$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model] --explicitBatch --optShapes=[name]:[NxCxHxW] --verbose
$ /usr/src/tensorrt/bin/trtexec --onnx=resnet10_dynamic_batch.onnx --explicitBatch --optShapes=data:1x3x368x640 --verbose
```
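For reference, the same fix can be sketched with the TensorRT Python API: the error means the network has a dynamic input dimension, and an optimization profile pins the min/opt/max shapes for it. In this sketch, `model.onnx`, the input name `"data"`, and the `1x3x368x640` shape are placeholders taken from the example command; substitute your own model's values. The import guard keeps it harmless on machines without TensorRT.

```python
# Hedged sketch: build a TensorRT engine with an explicit optimization
# profile -- the programmatic equivalent of trtexec's --optShapes flag.
# "model.onnx", input name "data", and the shapes below are placeholders.
try:
    import tensorrt as trt
    HAVE_TRT = True
except ImportError:
    HAVE_TRT = False  # TensorRT Python bindings not installed

if HAVE_TRT:
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Explicit-batch network, matching trtexec's --explicitBatch
    flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    network = builder.create_network(flags)
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))

    config = builder.create_builder_config()
    # The profile supplies concrete min/opt/max shapes for the dynamic input,
    # which is exactly what the "no optimization profile" error asks for.
    profile = builder.create_optimization_profile()
    profile.set_shape("data", min=(1, 3, 368, 640),
                              opt=(1, 3, 368, 640),
                              max=(1, 3, 368, 640))
    config.add_optimization_profile(profile)

    engine = builder.build_engine(network, config)
```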
Thanks.