Hello!
I’ve been trying to create a custom dataset for DIGITS for hours.
The command I used to start DIGITS is:
sudo nvidia-docker run --name digits -d -p 8888:5000 -v /home/username/data:/home/username/data -v /home/username/digits-jobs:/workspace/jobs nvcr.io/nvidia/digits:18.05
The dataset I am using is a subset of ILSVRC12, created by following the steps from here:
# Re-Training the Recognition Network
The existing GoogleNet and AlexNet models that are downloaded by the repo are pre-trained on [1000 classes of objects](../data/networks/ilsvrc12_synset_words.txt) from the ImageNet ILSVRC12 benchmark.
To recognize a new object class, you can use DIGITS to re-train the network on new data. You can also organize the existing classes differently, including grouping multiple subclasses into one. For example, in this tutorial we'll take 230 of the 1000 classes, group them into 12 classes, and re-train the network.
Let's start by downloading the ILSVRC12 images to work with, or you can substitute your own dataset in an **[Image Folder](https://github.com/NVIDIA/DIGITS/blob/master/docs/ImageFolderFormat.md)**.
### Downloading Image Recognition Dataset
An image recognition dataset consists of a large number of images sorted by their classification type (typically by directory). The ILSVRC12 dataset was used in the training of the default GoogleNet and AlexNet models. It's roughly 100GB in size and includes 1 million images over 1000 different classes. The dataset is downloaded to the DIGITS server using the [`imagenet-download.py`](../tools/imagenet-download.py) image crawler.
To download the dataset, first make sure you have enough disk space on your DIGITS server (120GB recommended), then run the following commands from a directory on that machine where you want the dataset stored:
My dataset is saved in /home/username/data, but in the Training Images field I keep getting the error: "folder must contain at least two subdirectories". I should mention that I gave all the files 777 permissions just to rule out any permission issue, and I did the same for every file referenced by a symbolic link (the photos are actually symbolic links created by the script in the link above, and yes, the links are valid; I checked).
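In case it helps to double-check the layout: DIGITS' Image Folder format expects the path you give it to contain one subdirectory per class, with the images inside those subdirectories. A minimal sketch of that layout, plus two checks I would run (all paths and file names here are illustrative, not from your setup):

```shell
# Illustrative sketch of the Image Folder layout DIGITS expects:
# one subdirectory per class, images inside each class subdirectory.
mkdir -p /tmp/digits-demo/cat /tmp/digits-demo/dog
touch /tmp/digits-demo/cat/img001.jpg /tmp/digits-demo/dog/img002.jpg

# DIGITS rejects the folder unless it finds at least two class subdirectories:
find /tmp/digits-demo -mindepth 1 -maxdepth 1 -type d | wc -l   # should print 2

# Symlinks whose targets sit outside the mounted volume resolve on the host
# but are dangling inside the container; -xtype l lists links with a missing
# target (here we create a deliberately broken one to demonstrate):
ln -s /nonexistent/target /tmp/digits-demo/cat/broken.jpg
find /tmp/digits-demo -xtype l
```

If running the `find -xtype l` check inside the container (e.g. `sudo docker exec digits find /home/username/data -xtype l`) prints anything, the symlink targets are not covered by a `-v` mount, which could explain why DIGITS sees no usable class subdirectories even though everything looks fine on the host.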
Any help would be greatly appreciated, thank you!
Hi. It’s 2022 and I’m facing the same problem. Kindly advise what to do.
Regards