Hi!
I have a working facenet engine file. Now I want to write a Python script that does the following:
1. Read in a face image.
2. Encode the image with the facenet engine model to obtain a 512-dimensional embedding.
How can I call the engine file directly from Python to perform these steps? Could you give examples in both Python and C++? Thank you.
Hi
You can find below a detection example:
# Locating Objects with DetectNet
The previous recognition examples output class probabilities representing the entire input image. Next we're going to focus on **object detection**, and finding where in the frame various objects are located by extracting their bounding boxes. Unlike image classification, object detection networks are capable of detecting many different objects per frame.
<img src="https://github.com/dusty-nv/jetson-inference/raw/dev/docs/images/detectnet.jpg" >
The [`detectNet`](../c/detectNet.h) object accepts an image as input, and outputs a list of coordinates of the detected bounding boxes along with their classes and confidence values. [`detectNet`](../c/detectNet.h) is available to use from [Python](https://rawgit.com/dusty-nv/jetson-inference/python/docs/html/python/jetson.inference.html#detectNet) and [C++](../c/detectNet.h). See below for various [pre-trained detection models](#pre-trained-detection-models-available) available for download. The default model used is a [91-class](../data/networks/ssd_coco_labels.txt) SSD-Mobilenet-v2 model trained on the MS COCO dataset, which achieves realtime inferencing performance on Jetson with TensorRT.
As examples of using the `detectNet` class, we provide sample programs for C++ and Python:
- [`detectnet.cpp`](../examples/detectnet/detectnet.cpp) (C++)
- [`detectnet.py`](../python/examples/detectnet.py) (Python)
These samples are able to detect objects in images, videos, and camera feeds. For more info about the various types of input/output streams supported, see the [Camera Streaming and Multimedia](aux-streaming.md) page.
### Detecting Objects from Images
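For reference, typical `detectNet` usage from Python looks like the sketch below. The image path and network name are illustrative; `jetson-inference` must be installed on the target Jetson, so the imports are deferred inside the function to keep the sketch importable elsewhere:

```python
def detect_objects(image_path, network="ssd-mobilenet-v2", threshold=0.5):
    """Run detectNet on one image and return its list of detections."""
    # Deferred imports: jetson-inference is only present on the target device.
    import jetson.inference
    import jetson.utils

    net = jetson.inference.detectNet(network, threshold=threshold)
    img = jetson.utils.loadImage(image_path)       # loads into GPU memory
    detections = net.Detect(img)
    for det in detections:
        # Each detection carries its class, confidence, and bounding box.
        print(det.ClassID, det.Confidence, det.Left, det.Top, det.Right, det.Bottom)
    return detections
```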
Thanks.
Hi!
I looked at the example you gave, but it's not what I want. I want to use the Python API from that link to parse and run the model, but I can't write the parsing code myself. Could you give me an example? Thanks.
Hi,
Please check the example below:
```python
# Set dynamic input shapes on the execution context before running.
for binding, shape in shapes.items():
    context.set_binding_shape(engine[binding] + binding_idx_offset, shape)
assert context.all_binding_shapes_specified

# Inference
total_time = 0
start = cuda.Event()
end = cuda.Event()
stream = cuda.Stream()

# Warmup
for _ in range(args.warm_up_runs):
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    stream.synchronize()

# Timing loop
times = []
for _ in range(args.iterations):
    start.record(stream)
    context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
    end.record(stream)
    stream.synchronize()
    times.append(start.time_till(end))  # elapsed time in milliseconds
```
After you run inference with a given input, the output buffer can be copied back to the CPU with `cuda.memcpy_dtoh_async()`.
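Once the 512-dimensional embedding is back on the host, faces are usually compared by the distance between their L2-normalized embeddings. A minimal pure-Python sketch (the toy 3-D vectors and the 0.6 threshold are illustrative stand-ins for real 512-D facenet embeddings):

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit length so Euclidean distance tracks cosine similarity."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy 3-D stand-ins for 512-D embeddings of two photos of the same person.
e1 = l2_normalize([1.0, 2.0, 2.0])
e2 = l2_normalize([1.0, 2.0, 1.9])
print(euclidean(e1, e2) < 0.6)  # small distance -> likely the same face
```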
Please note that you can mark the expected output layer when you convert the model into a TensorRT engine.
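The steps above can be combined into a minimal end-to-end sketch for your facenet case: deserialize the engine, copy a preprocessed face image to the GPU, run inference, and copy the 512-D output back. This is a hedged sketch, not a definitive implementation: it assumes a single input binding and a single 512-float output binding, and the preprocessing (resize, normalization, channel order) must match however the engine was built. TensorRT and PyCUDA are only present on the target device, so the imports are deferred:

```python
def facenet_embedding(engine_path, face_chw):
    """Return the 512-D facenet embedding for one preprocessed face image.

    face_chw: float32 array already resized/normalized to the engine's
    input shape (exact preprocessing depends on how the engine was built).
    """
    # Deferred imports: TensorRT/PyCUDA are only present on the target device.
    import numpy as np
    import tensorrt as trt
    import pycuda.autoinit  # noqa: F401 -- creates a CUDA context
    import pycuda.driver as cuda

    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()
    stream = cuda.Stream()

    # Host/device buffers, assuming one input binding and one 512-D output.
    h_in = np.ascontiguousarray(face_chw, dtype=np.float32)
    h_out = np.empty(512, dtype=np.float32)
    d_in = cuda.mem_alloc(h_in.nbytes)
    d_out = cuda.mem_alloc(h_out.nbytes)

    cuda.memcpy_htod_async(d_in, h_in, stream)
    context.execute_async_v2(bindings=[int(d_in), int(d_out)],
                             stream_handle=stream.handle)
    cuda.memcpy_dtoh_async(h_out, d_out, stream)
    stream.synchronize()
    return h_out
```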
Thanks.
system — Closed — December 22, 2021, 6:03am
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.