Hello,
I would like to control an actuator system and take pictures in parallel when a specially trained model detects an object. How can I implement this?
For example, the following application should be possible: as soon as a water bottle is visible in the image, take a photo, save the photo, and output a signal via an LED. Saving the photo and outputting the LED signal should happen simultaneously. How can I implement this?
Nvidia Jetson TX2.
Hi markusnvidia,
That project sounds interesting. You will need an inference engine that detects the bottles; after each detection, a signal should be sent to take the snapshot and blink the LED.
We have an open-source project called GstInference which handles object detection (TinyYOLOv2 and TinyYOLOv3). We created a GStreamer plugin that emits a signal for each detected object.
This can be modified to detect bottles exclusively.
This is the link to the project’s page: https://developer.ridgerun.com/wiki/index.php?title=GstInference/Introduction
This would be one possible implementation for your case.
Greivin F.
Hi,
You can also refer to the link below in case it helps:
Thanks
Hello,
Is there a way to take pictures via Python with gstCamera?
Thanks. But how can I store this image on my drive from within the Python code?
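The answer itself is not preserved in this thread, but a minimal sketch of capturing and saving a frame could look like the following. It assumes the legacy `jetson.utils` Python API from the jetson-inference project (`gstCamera`, `CaptureRGBA`, `saveImageRGBA`) and `/dev/video0` as the camera device; adapt both to your setup.

```python
def capture_and_save(filename="capture.jpg", width=1280, height=720):
    """Capture one frame from a V4L2 camera and write it to disk.

    Jetson-only sketch: jetson.utils ships with the jetson-inference
    project, so this import only works on a Jetson with it installed.
    """
    import jetson.utils
    camera = jetson.utils.gstCamera(width, height, "/dev/video0")
    img, w, h = camera.CaptureRGBA()          # grab one RGBA frame in CUDA memory
    jetson.utils.saveImageRGBA(filename, img, w, h)  # write it to disk
    return filename
```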
Thanks, it works.
Now I have the problem that only one picture is saved at a time: every further image overwrites the existing one, because I defined a fixed name for the image. Can I make the image name variable, so that if the name already exists, a different file name is chosen?
Thanks.
Yes, that should work. You can even add a timestamp to the output file name to avoid the overwrite issue.
Thanks
How can I add a timestamp to the output file name?
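The snippet that answered this is not preserved here, but building a unique, timestamped file name needs only the Python standard library. The directory and prefix below are placeholders:

```python
import os
from datetime import datetime

def timestamped_name(directory="captures", prefix="photo", ext="jpg"):
    """Return a file name like captures/photo_20240101_120000_123456.jpg.

    The microsecond field (%f) makes collisions between successive
    frames very unlikely, so earlier photos are not overwritten.
    """
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S_%f")
    return os.path.join(directory, f"{prefix}_{stamp}.{ext}")
```

Pass the returned name to whatever save call you already use instead of the fixed file name.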
Thanks. It works. I have another question.
Is there a way to save the coordinates of the detected objects (maybe in a .txt file)?
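One way to sketch this: detectNet's Python results expose the bounding box and class as attributes (`ClassID`, `Confidence`, `Left`, `Top`, `Right`, `Bottom`), which can be written out one detection per line. The exact file format here is just an illustration:

```python
def save_detections(detections, path):
    """Write one line per detection: class id, confidence, then the
    bounding box as left/top/right/bottom pixel coordinates.

    Each item in `detections` is expected to expose the ClassID,
    Confidence, Left, Top, Right and Bottom attributes that
    jetson.inference's detectNet results provide.
    """
    with open(path, "w") as f:
        for d in detections:
            f.write(f"{d.ClassID} {d.Confidence:.2f} "
                    f"{d.Left:.1f} {d.Top:.1f} {d.Right:.1f} {d.Bottom:.1f}\n")
```

In the detection loop you would call this with the list returned by `net.Detect()` after each frame.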
Hi,
Please refer to the detectNet links below:
# Running the Live Camera Detection Demo
Up next we have a realtime object detection camera demo available for C++ and Python:
- [`detectnet-camera.cpp`](../examples/detectnet-camera/detectnet-camera.cpp) (C++)
- [`detectnet-camera.py`](../python/examples/detectnet-camera.py) (Python)
Similar to the previous [`detectnet-console`](detectnet-console-2.md) example, these camera applications use detection networks, except that they process a live video feed from a camera. `detectnet-camera` accepts various **optional** command-line parameters, including:
- `--network` flag which changes the [detection model](detectnet-console-2.md#pre-trained-detection-models-available) being used (the default is SSD-Mobilenet-v2).
- `--overlay` flag which can be comma-separated combinations of `box`, `labels`, `conf`, and `none`
- The default is `--overlay=box,labels,conf` which displays boxes, labels, and confidence values
- `--alpha` value which sets the alpha blending value used during overlay (the default is `120`).
- `--threshold` value which sets the minimum threshold for detection (the default is `0.5`).
- `--camera` flag setting the camera device to use
https://rawgit.com/dusty-nv/jetson-inference/python/docs/html/python/jetson.inference.html#detectNet
Thanks
Thank you. I have trained my own models with DIGITS, but I cannot use them in this code:
net = jetson.inference.detectNet("my-own-model", threshold=0.5)
This only works for the pretrained models. How can I use my own model in my own code?
How can I use my own model in this line?
net = jetson.inference.detectNet(opt.network, sys.argv, opt.threshold)
These are the files I have in my model directory:
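No answer is preserved here, but the usual jetson-inference approach is to select a custom model through extra command-line style arguments, which the example scripts forward to detectNet via `sys.argv`. A hedged sketch for a DIGITS-trained DetectNet follows; the file names are placeholders, and `data`/`coverage`/`bboxes` are the DIGITS DetectNet default blob names:

```python
def custom_detectnet_argv(model, labels, threshold=0.5):
    """Build the extra arguments that make detectNet load a custom
    model instead of a pretrained network."""
    return [
        f"--model={model}",       # path to your snapshot .caffemodel
        f"--labels={labels}",     # path to your class labels file
        "--input-blob=data",      # DIGITS DetectNet default input layer
        "--output-cvg=coverage",  # coverage/confidence output layer
        "--output-bbox=bboxes",   # bounding-box output layer
        f"--threshold={threshold}",
    ]

def load_custom_detectnet(model, labels, threshold=0.5):
    import jetson.inference  # Jetson-only
    return jetson.inference.detectNet(
        argv=custom_detectnet_argv(model, labels, threshold))
```

Equivalently, you can pass the same `--model=...`, `--labels=...`, etc. flags on the command line when running `detectnet-camera.py`, since it hands `sys.argv` to `detectNet`.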
I want to take a photo when I detect an object. How can I do it?
#!/usr/bin/env python3
#
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
I use Docker and my USB camera (/dev/video0).
How do I combine my image-detection code with the LED signal?
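One way to save the photo and drive the LED at the same time, as asked at the start of the thread, is to run the LED pulse on its own thread while the main thread saves the frame. This is a sketch, not the thread's official answer: it assumes the Jetson.GPIO library (preinstalled with JetPack) and board pin 12 for the LED, both of which you would adapt to your wiring.

```python
import threading
import time

def blink_led(pin=12, duration=0.5):
    """Pulse an LED once. Jetson-only: Jetson.GPIO works on the
    Jetson's 40-pin header (pin 12 here is an assumption)."""
    import Jetson.GPIO as GPIO
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(pin, GPIO.OUT)
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(duration)
    GPIO.output(pin, GPIO.LOW)
    GPIO.cleanup(pin)

def handle_detection(img, save_fn, led_fn=blink_led):
    """Blink the LED and save the frame simultaneously: the LED pulse
    runs on its own thread while save_fn(img) runs on this thread."""
    t = threading.Thread(target=led_fn, daemon=True)
    t.start()
    save_fn(img)   # e.g. a call to jetson.utils.saveImageRGBA(...)
    t.join()       # wait for the LED pulse to finish
```

In the detection loop, call `handle_detection(...)` whenever `net.Detect()` reports your target class.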