DIGITS 2.0 foot in the door - noob question

Sorry for the very simple question…
What should my input data set directory look like?
I have run the example, cool, it works.
What should my directory structure look like?
What additional files do I need to create?
The example has separate directories and a few index files.
They look really simple but I would love to be able to read a spec of what they need to be.

Your input data set should live in a single directory. Within that directory there should be a sub-directory for each “category” of classification. Within each sub-directory should be the image files (e.g. jpg or png) used to train, i.e. learn, that classification category.

So, for example, the MNIST database contains sub-directories named 0 through 9, each corresponding to one of the categories to be “learned”. Within each sub-directory are the individual image files used to train, i.e. “learn”, that category.
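
For reference, here is a minimal sketch of that layout plus a quick sanity check. The folder name, category names, and extension list below are only illustrative assumptions, not anything DIGITS requires - the key point is simply one sub-directory per class.

```python
# Expected layout (names are just examples):
#
#   my_dataset/
#       cat/    cat_0001.jpg, cat_0002.jpg, ...
#       dog/    dog_0001.jpg, ...
#       horse/  ...
import os

DATASET_DIR = "my_dataset"              # hypothetical top-level folder
IMAGE_EXTS = (".jpg", ".jpeg", ".png")  # assumed image extensions

# Count the training images found for each category sub-directory.
for category in sorted(os.listdir(DATASET_DIR)):
    subdir = os.path.join(DATASET_DIR, category)
    if not os.path.isdir(subdir):
        continue
    images = [f for f in os.listdir(subdir)
              if f.lower().endswith(IMAGE_EXTS)]
    print("%-10s %d images" % (category, len(images)))
```

When you create a classification dataset in DIGITS you just point it at the top-level folder; the index/list files you saw in the example are generated for you.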

Thanks, nice and simple. I was put off by the index files etc., but I will just ignore that issue and see how far I can get.

Wow! Tried the Caltech 101 data set using GoogLeNet/Caffe on a GeForce 980, and within 30 minutes it was correctly classifying roughly every other test image pulled from random Google image searches. I removed the “background” images. I tried to continue re-using the trained model but got an error, so I started training again.

If I have understood correctly, a pre-trained model can be used with a different number of outputs, but it needs editing. Is this hard to do? It would be nice to add/remove categories without going back to square one.

Yes, it can be done. There is an option in DIGITS to start with a pre-trained model.

You would then use the model editor to add neurons to the output layer (at least - you might want to make other layer/network changes) and then start training with the new data set from there. To do this correctly you need some understanding of Caffe prototxt and how to express neurons/layers in it. After you are done editing the prototxt, you can visualize the result to see if it looks correct - there is a button to visualize the prototxt in graph form.
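
To make that concrete, here is a rough sketch of the kind of edit involved, done with Caffe's Python protobuf bindings rather than the DIGITS editor. The file names, class count, and layer-selection logic are assumptions for illustration, and GoogLeNet in particular has auxiliary classifier layers that would need the same treatment.

```python
# Rough sketch: retarget a Caffe network definition to a new number of
# classes before fine-tuning from pre-trained weights.
# Assumes the Caffe Python bindings are installed; names are illustrative.
from caffe.proto import caffe_pb2
from google.protobuf import text_format

NUM_CLASSES = 101  # e.g. Caltech 101 with the "background" class removed

net = caffe_pb2.NetParameter()
with open("train_val.prototxt") as f:
    text_format.Merge(f.read(), net)

# Take the last fully-connected (InnerProduct) layer as the classifier.
# (GoogLeNet also has two auxiliary classifiers that need the same edit.)
classifier = [l for l in net.layer if l.type == "InnerProduct"][-1]

# Resize the output and rename the layer so Caffe initialises it from
# scratch instead of trying to copy the old, differently-shaped weights
# (a common cause of shape-mismatch errors when re-using a trained model).
classifier.inner_product_param.num_output = NUM_CLASSES
classifier.name = classifier.name + "_retrain"

with open("train_val_new.prototxt", "w") as f:
    f.write(text_format.MessageToString(net))
```

Within DIGITS itself you would make the equivalent change by editing the prototxt in the custom network text box, then using the visualize button described above to check the resulting graph.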

For questions like this, you might want to:

  1. read the parallelforall blog post:

http://devblogs.nvidia.com/parallelforall/digits-deep-learning-gpu-training-system/

  2. join the Google Group for digits-users:

https://groups.google.com/forum/#!forum/digits-users

  3. look at some recent postings on this topic, such as this one:

https://groups.google.com/forum/#!topic/digits-users/K48VP51NbqY

Thanks.

The following was a good intro too.

NVIDIA Deep Learning Course: Class #2 - Getting Started with DIGITS https://www.youtube.com/watch?v=jUiudfxjdr8

I’ll give this editing a go, as I think I will want to train with an initial large list of categories and then probably add more.

I wondered if I could use the pre-defined model on the new, slightly larger data set, then inspect this model using the visualisation and copy the differences across. I have not fully understood the relationship between the net design and the training data (how they are propagated, behind the scenes, from one training run to the next). The visualisation looks like a good way to try and understand the model.

Thanks for this. I need to build a data set first and then give this all a go.