<img src="https://github.com/dusty-nv/jetson-inference/raw/master/docs/images/deep-vision-header.jpg" width="100%">
<p align="right"><sup><a href="pytorch-plants.md">Back</a> | <a href="../README.md#hello-ai-world">Next</a> | </sup><a href="../README.md#hello-ai-world"><sup>Contents</sup></a>
<sup>Transfer Learning - Object Detection</sup></p>
# Collecting your own Detection Datasets
The `camera-capture` tool used previously can also be used to label object detection datasets from live video:
<img src="https://github.com/dusty-nv/jetson-inference/raw/dev/docs/images/pytorch-collection-detect.jpg" >
When the `Dataset Type` drop-down is set to Detection mode, the tool creates datasets in [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) format, which the training scripts support.
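For reference, a Pascal VOC dataset stores one XML annotation per image, listing each labeled object and its bounding box. A minimal annotation might look like the sketch below (the filename, class name, and coordinates are placeholders, not values produced by this tool):

```xml
<annotation>
    <filename>20210203-093045.jpg</filename>
    <size>
        <width>1280</width>
        <height>720</height>
        <depth>3</depth>
    </size>
    <object>
        <name>bottle</name>              <!-- class label from labels.txt -->
        <bndbox>                          <!-- pixel coordinates of the box -->
            <xmin>420</xmin>
            <ymin>180</ymin>
            <xmax>610</xmax>
            <ymax>540</ymax>
        </bndbox>
    </object>
</annotation>
```

Each image can contain multiple `<object>` entries, one per labeled bounding box.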
> **note:** if you want to label a set of images that you already have (as opposed to capturing them from the camera), try using a tool like [`LabelImg`](https://github.com/tzutalin/labelImg), which also saves annotations in Pascal VOC format. If you need to label a video file, dump the video frames to images first.
## Creating the Label File
First, create an empty directory for storing your dataset and a text file that will define the class labels (usually called `labels.txt`). The label file contains one class label per line, for example:
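As a sketch of this step, the commands below create a dataset directory and a `labels.txt` inside it (the directory name and the three class names are placeholders — substitute the objects you plan to detect):

```shell
# Hypothetical dataset location and class names -- replace with your own
mkdir -p my-detection-dataset

cat > my-detection-dataset/labels.txt << 'EOF'
bottle
cup
keyboard
EOF
```

The order of the lines matters: each class is assigned an ID based on its line number, so keep `labels.txt` stable once you begin capturing data.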