I have followed the tutorial: Collecting your own Detection Datasets.
I wonder whether we can use pictures (without already-labeled detection boxes), or whether we should only use the live video stream from a camera?
May I know which Jetson platform and JetPack version you used?
I use a Jetson Xavier NX and the latest version of JetPack (4.5.1).
Hi,
You can use pictures, but please create the label file following the rules mentioned below:
# Collecting your own Detection Datasets
The previously used `camera-capture` tool can also label object detection datasets from live video:
<img src="https://github.com/dusty-nv/jetson-inference/raw/dev/docs/images/pytorch-collection-detect.jpg" >
When the `Dataset Type` drop-down is in Detection mode, the tool creates datasets in [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) format (which is supported during training).
> **note:** if you want to label a set of images that you already have (as opposed to capturing them from camera), try using a tool like [`CVAT`](https://github.com/openvinotoolkit/cvat) and export the dataset in Pascal VOC format. Then create a labels.txt in the dataset with the names of each of your object classes.
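As a rough sketch of what the exported Pascal VOC annotations contain, each image gets an XML file with the image filename and one `<object>` entry per bounding box, which can be read with Python's standard library (the file name, class name, and coordinates below are illustrative, not taken from the tutorial):

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_text):
    """Parse a Pascal VOC annotation and return (filename, [(class, box), ...])."""
    root = ET.fromstring(xml_text)
    filename = root.findtext("filename")
    objects = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        box = obj.find("bndbox")
        # VOC stores pixel coordinates of the box corners
        coords = tuple(int(box.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        objects.append((name, coords))
    return filename, objects

# Minimal hypothetical annotation, as a VOC-format export might produce:
sample = """<annotation>
  <filename>image_0001.jpg</filename>
  <object>
    <name>apple</name>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>"""

print(parse_voc_annotation(sample))
```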
## Creating the Label File
Under `jetson-inference/python/training/detection/ssd/data`, create an empty directory for storing your dataset and a text file that will define the class labels (usually called `labels.txt`). The label file contains one class label per line, for example:
``` bash
bottle
cup
keyboard
```
Thanks.
If you already have the pictures, you can use the CVAT tool to annotate them. Then export them in Pascal VOC format and, as Aasta mentioned, create a labels.txt file for the dataset.
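After exporting from CVAT, a quick sanity check that every annotated class name also appears in labels.txt can save a failed training run. A minimal sketch, assuming the common VOC layout with an `Annotations/` folder next to `labels.txt` (the directory layout and class names are illustrative; adjust the paths to your dataset):

```python
import tempfile
import xml.etree.ElementTree as ET
from pathlib import Path

def check_labels(dataset_dir):
    """Return annotation class names that are missing from labels.txt."""
    dataset = Path(dataset_dir)
    labels = {line.strip()
              for line in (dataset / "labels.txt").read_text().splitlines()
              if line.strip()}
    missing = set()
    for xml_file in (dataset / "Annotations").glob("*.xml"):
        for obj in ET.parse(xml_file).iter("object"):
            name = obj.findtext("name")
            if name not in labels:
                missing.add(name)
    return missing

# Build a tiny throwaway dataset to demonstrate (contents are made up):
root = Path(tempfile.mkdtemp())
(root / "Annotations").mkdir()
(root / "labels.txt").write_text("apple\norange\n")
(root / "Annotations" / "0001.xml").write_text(
    "<annotation><object><name>apple</name></object>"
    "<object><name>banana</name></object></annotation>")

print(check_labels(root))  # 'banana' is annotated but missing from labels.txt
```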