Description
Hi, we set up a brand new Xavier NX with DeepStream 5 (DS5) and are running tests with the YOLOv3 model. How can we turn off all detection classes and use only one for processing? Say we only want to detect books: which config file do we change so that only one class is processed instead of all of them? The reason is that when processing video streams, all 80 classes are detected by default, which drops the Xavier's performance to 6 fps.
Thanks for any feedback.
Environment
Xavier NX, DS5, Jetpack 4.4
TensorRT Version : 7
GPU Type : Xavier NX
Nvidia Driver Version : Latest
CUDA Version :
CUDNN Version :
Operating System + Version : Ubuntu
Python Version (if applicable) :
TensorFlow Version (if applicable) :
PyTorch Version (if applicable) :
Baremetal or Container (if container which image + tag) :
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
Exact steps/commands to build your repro
Exact steps/commands to run your repro
Full traceback of errors encountered
It doesn't seem to be a TensorRT issue.
I found a few links which might be useful:
(Quoted from a tutorial issue in the ultralytics/yolov3 GitHub repo, opened 20 Feb 2019, closed 12 Nov 2020:)
This guide explains how to train your own **single-class dataset** with YOLOv3.
…
## Before You Start
1. Update (Python >= 3.7, PyTorch >= 1.3, etc.) and install [requirements.txt](https://github.com/ultralytics/yolov3/blob/master/requirements.txt) dependencies.
2. Clone repo: `git clone https://github.com/ultralytics/yolov3`
3. Download [COCO](http://cocodataset.org/#home): `bash yolov3/data/get_coco2017.sh`
## Train On Custom Data
**1. Label your data in Darknet format.** After using a tool like [Labelbox](https://labelbox.com/) to label your images, you'll need to export your data to darknet format. Your data should follow the example created by `get_coco2017.sh`, with images and labels in separate parallel folders, and one label file per image (if no objects in image, no label file is required). The label file specifications are:
- One row per object
- Each row is `class x_center y_center width height` format.
- Box coordinates must be in **normalized xywh** format (from 0 - 1). If your boxes are in pixels, divide `x_center` and `width` by image width, and `y_center` and `height` by image height (see the conversion sketch after this list).
- Class numbers are zero-indexed (start from 0).
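For example, a minimal sketch of the pixel-to-normalized conversion described in the list above (plain Python; the function name and the example box values are illustrative, not taken from the repo):

```python
# Convert a pixel-space bounding box (x_min, y_min, x_max, y_max) into the
# normalized Darknet label row: class x_center y_center width height.
def to_darknet_row(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    x_center = (x_min + x_max) / 2.0 / img_w   # normalize by image width
    y_center = (y_min + y_max) / 2.0 / img_h   # normalize by image height
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Illustrative example: a 100x200 px box at (50, 80) in a 640x480 image, class 0
print(to_darknet_row(0, 50, 80, 150, 280, 640, 480))
# -> 0 0.156250 0.375000 0.156250 0.416667
```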
Each image's label file must be locatable by simply replacing `/images/*.jpg` with `/labels/*.txt` in its pathname. An example image and label pair would be:
```bash
../coco/images/train2017/000000109622.jpg # image
../coco/labels/train2017/000000109622.txt # label
```
An example label file with 4 persons (all class `0`):
<img width="474" alt="screenshot 2019-02-20 at 17 05 23" src="https://user-images.githubusercontent.com/26833433/53105599-ba7dff80-3531-11e9-8860-10a48872b043.png">
**2. Create train and test `*.txt` files.** Here we create `data/coco_1cls.txt`, which contains 5 images with only persons from the COCO 2014 trainval dataset. We will use this small dataset for both training and testing. Each row contains a path to an image, and remember that a corresponding label file must also exist in the parallel `/labels` folder for each image that has targets.
<img width="535" alt="Screenshot 2019-04-07 at 13 50 06" src="https://user-images.githubusercontent.com/26833433/55683093-1e228780-593c-11e9-8751-308cdc2c3cdc.png">
**3. Create a new `*.names` file** listing all of the class names in our dataset. Here we use the existing `data/coco.names` file. Classes are **zero indexed**, so `person` is class `0`.
<img width="519" alt="screenshot 2019-02-20 at 16 50 30" src="https://user-images.githubusercontent.com/26833433/53104447-a9cc8a00-352f-11e9-9c2b-d5b2cc494f96.png">
**4. Update `data/coco.data`** lines 2 and 3 to point to our new text file for training and validation (in your own data you would likely want to use separate train and test sets). Also update line 1 to our new class count, if not 80, and lastly update line 4 to point to our new `*.names` file, if you created one. Save the modified file as `data/coco_1cls.data`.
<img width="444" alt="Screenshot 2019-04-07 at 13 48 48" src="https://user-images.githubusercontent.com/26833433/55683084-f9c6ab00-593b-11e9-877d-9003afa44aa1.png">
**5. Update `*.cfg` file** (optional). Each YOLO layer has 255 outputs: 85 outputs per anchor [4 box coordinates + 1 object confidence + 80 class confidences], times 3 anchors. If you use fewer classes, reduce filters to `filters=[4 + 1 + n] * 3`, where `n` is your class count. This modification should be made to the layer preceding each of the 3 YOLO layers. Also modify `classes=80` to `classes=n` in each YOLO layer, where `n` is your class count (for single class training, `n=1`).
<img width="723" alt="screenshot 2019-02-21 at 19 40 01" src="https://user-images.githubusercontent.com/26833433/66830924-e03f9500-ef56-11e9-9d09-97f9921cab39.png">
**6. (OPTIONAL) Update hyperparameters** such as LR, LR scheduler, optimizer, augmentation settings, multi_scale settings, etc. in `train.py` for your particular task. We recommend you start with all-default settings before updating anything.
**7. Train.** Run `python3 train.py --data data/coco_1cls.data` to train using your custom data. If you created a custom `*.cfg` file as well, specify it using `--cfg cfg/my_new_file.cfg`.
## Visualize Results
Run `from utils import utils; utils.plot_results()` to see your training losses and performance metrics vs epoch. If you don't see acceptable performance, try hyperparameter tuning and re-training. Multiple `results.txt` files are overlaid automatically to compare performance.
Here we see results from training on `coco_1cls.data` using the default `yolov3-spp.cfg` and also a single-class `yolov3-spp-1cls.cfg`, available in the `data/` and `cfg/` folders.
![results (2)](https://user-images.githubusercontent.com/26833433/68169987-f1127380-ff22-11e9-9cfa-24bf878d8850.png)
Evaluate your trained model: copy `COCO_val2014_000000001464.jpg` to the `data/samples` folder and run `python3 detect.py --weights weights/last.pt`.
![coco_val2014_000000001464](https://user-images.githubusercontent.com/26833433/53104219-42aed580-352f-11e9-9be5-60f84ab05dc1.jpg)
## Reproduce Our Results
To reproduce this tutorial, simply run the following code. This trains all the various [tutorials](https://github.com/ultralytics/yolov3/wiki), saves each results*.txt file separately, and plots them together as `results.png`. It all takes less than 30 minutes on a 2080Ti.
```bash
git clone https://github.com/ultralytics/yolov3
python3 -c "from yolov3.utils.google_utils import gdrive_download; gdrive_download('1h0Id-7GUyuAmyc9Pwo2c3IZ17uExPvOA','coco2017demos.zip')" # datasets (20 Mb)
cd yolov3
python3 train.py --data coco64.data --batch 16 --accum 1 --epochs 300 --nosave --cache --weights '' --name from_scratch
python3 train.py --data coco64.data --batch 16 --accum 1 --epochs 300 --nosave --cache --weights yolov3-spp-ultralytics.pt --name from_yolov3-spp-ultralytics
python3 train.py --data coco64.data --batch 16 --accum 1 --epochs 300 --nosave --cache --weights darknet53.conv.74 --name from_darknet53.conv.74
python3 train.py --data coco1.data --batch 1 --accum 1 --epochs 300 --nosave --cache --weights darknet53.conv.74 --name 1img
python3 train.py --data coco1cls.data --batch 16 --accum 1 --epochs 300 --nosave --cache --weights darknet53.conv.74 --cfg yolov3-spp-1cls.cfg --name 1cls
```
## Reproduce Our Environment
To access an up-to-date working environment (with all dependencies, including CUDA/cuDNN, Python and PyTorch, preinstalled), consider one of the following:
- **GCP** Deep Learning VM with $300 free credit offer: See our [GCP Quickstart Guide](https://github.com/ultralytics/yolov3/wiki/GCP-Quickstart)
- **Google Colab Notebook** with 12 hours of free GPU time: [Google Colab Notebook](https://colab.research.google.com/drive/1G8T-VFxQkjDe4idzN8F-hbIBqkkkQnxw)
- **Docker Image** from https://hub.docker.com/r/ultralytics/yolov3. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov3/wiki/Docker-Quickstart)
(From another GitHub issue, opened 15 Aug 2017:)
Yolo works straight out of the box perfectly for my needs, and identifies correctly the object in the photos that I've tested it with. I only want to test for one specific object though, I don't care if there's chairs or sheep or cars or trees in the photo as well. It should be possible to tell darknet when running it to look for one specific object.
Maybe it is and I'm being blind/stupid?
Thank you!
In case of further queries, we request you to raise an issue in the YOLOv3 forum / GitHub issues section.
Thanks