DIGITS: Deep Learning GPU Training System

I have been working with Ubuntu 14.04... it's not user-friendly and reminds me of DOS.

Allison... it also seems like GPUs are used for processing, but no one talks about why other than that they are faster than CPUs. Faster for what? Processing images? I'd only be interested in image processing for learning purposes, not DL projects.
What kind of DL work are you doing on Windows OS?

If you need to know what GPUs are good at, I recommend first reading this page and following the links that interest you: https://developer.nvidia.co... You might also just click around on this blog and look at the wide variety of articles covering many different applications of GPUs.

I've read everything I can get my hands on concerning NVIDIA, the DIGITS DevBox, GPUs, and deep learning, including blogs. There is NOT a single source of truth; there are as many opinions on the subject as there are blogs about it. Thanks for the link, Mark... I am familiar with CUDA as well... all represent graphical representations of models. One does not need to work with pixel processing to do ML, and in fact I feel this kind of study belongs to modeling rather than ML.

Hi there. I'm trying to build a detector using a trained model. I downloaded the mean.binaryproto, deploy.prototxt, and .caffemodel files from the last epoch and fed them into pycaffe. However, I'm not able to get the same high validation accuracy from net.forward_all (99.6% from DIGITS, 89.6% from Python). Before classification, I subtract the mean image from the input image. Has anyone done this before? Is there any step I forgot? I really appreciate your help!!!
BTW, I used a model trained with pycaffe and repeated the same validation procedure; the accuracy of the pycaffe-trained model matched what is displayed in the training interface.
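
For anyone comparing DIGITS and pycaffe numbers, here is a minimal sketch of the usual preprocessing pipeline (resize to the dataset size, scale to 0-255, swap RGB to BGR, subtract the mean image, centre-crop to the network input). The file names and test image are placeholders, and the exact order of these steps is an assumption based on the typical DIGITS/Caffe setup; a mismatch in that order is a common cause of this kind of accuracy gap:

    import numpy as np
    import caffe

    # Placeholder paths for the files downloaded from DIGITS.
    deploy_file = 'deploy.prototxt'
    weights_file = 'snapshot.caffemodel'
    mean_file = 'mean.binaryproto'

    caffe.set_mode_cpu()                               # or caffe.set_mode_gpu()

    # Load the mean image that DIGITS stores as a binaryproto.
    blob = caffe.proto.caffe_pb2.BlobProto()
    with open(mean_file, 'rb') as f:
        blob.ParseFromString(f.read())
    mean = caffe.io.blobproto_to_array(blob)[0]        # (channels, height, width)

    net = caffe.Net(deploy_file, weights_file, caffe.TEST)

    # Preprocess one image the way DIGITS typically does for a colour dataset.
    im = caffe.io.load_image('test.png')               # RGB, float in [0, 1], HWC
    im = caffe.io.resize_image(im, mean.shape[1:])     # resize to the dataset size
    im = im[:, :, ::-1].transpose(2, 0, 1) * 255.0     # BGR, CHW, scale to [0, 255]
    im = im - mean                                     # subtract the full mean image

    # Centre-crop if the network input is smaller than the dataset images
    # (e.g. AlexNet crops 256x256 down to 227x227).
    _, c, h_in, w_in = net.blobs['data'].data.shape
    h, w = im.shape[1:]
    y0, x0 = (h - h_in) // 2, (w - w_in) // 2
    im = im[:, y0:y0 + h_in, x0:x0 + w_in]

    net.blobs['data'].data[...] = im
    out = net.forward()
    probs = out[net.outputs[0]]                        # first (usually only) output blob
    print(probs[0].argmax())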

Can you just use DIGITS for monitoring GPU usage?

Sorry, your question is unclear. Do you mean "use digits only to monitor GPU utilization"? If so, that seems like overkill. You could use nvidia-smi or NVML for that. https://developer.nvidia.co...
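
For example, a minimal polling loop through NVML's Python bindings (assuming the pynvml package is installed) looks roughly like this; nvidia-smi on the command line reports the same counters:

    import time
    from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetCount,
                        nvmlDeviceGetHandleByIndex, nvmlDeviceGetUtilizationRates,
                        nvmlDeviceGetMemoryInfo)

    nvmlInit()
    try:
        handles = [nvmlDeviceGetHandleByIndex(i) for i in range(nvmlDeviceGetCount())]
        for _ in range(10):                                  # poll ten times, once per second
            for i, h in enumerate(handles):
                util = nvmlDeviceGetUtilizationRates(h)      # .gpu / .memory are percentages
                mem = nvmlDeviceGetMemoryInfo(h)             # .used / .total are in bytes
                print('GPU %d: %3d%% util, %d/%d MiB' %
                      (i, util.gpu, mem.used // 2**20, mem.total // 2**20))
            time.sleep(1)
    finally:
        nvmlShutdown()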

Hi, we received our DIGITS DevBox on 07/13, but it doesn't plot the accuracy and loss values in the top chart, nor the GPU usage and other stats. Would you please advise?

Where did you download DIGITS? DIGITS 1 required an internet connection to plot and did not display GPU usage. Are you using that version by chance?

Did you try using the classification example under DIGITS_ROOT/examples/classification?

We just received the hardware and are using the version already installed on it, so it must be DIGITS 1, I guess. The machine was not connected to the internet while we ran the first test, so as you mentioned this might be the problem.

Your DIGITS DevBox was likely delivered with DIGITS 1 installed. Let me know if the plotting does not work when you are connected to the internet. If you want to upgrade to DIGITS 2, you can download it here: https://developer.nvidia.co.... Let me know if you have any issues. If you want to keep DIGITS 1 running, you can run DIGITS 2 on a different port: ./digits-devserver -p 5001

It works after we connected it to the internet. Thanks for the help.
We're gonna upgrade to DIGITS 2 later this week. Does DIGITS 1 include multi-GPU processing capability?

DIGITS 1 supports multiple GPUs: you can run one training job per GPU. With DIGITS 2, you can run a single training job across more than one GPU.

Hi, I would like to check out the images after the dataset is created. Could you tell me how to do that?

Sorry, I don't have a script to share with you. It looks like someone asked a similar question on the Caffe-users Google group - https://groups.google.com/f.... One of the responses includes an example script to help you get started. For more information on lmdb calls, I recommend looking at their documentation - http://lmdb.readthedocs.org...
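
In case it helps as a starting point, here is a rough sketch of such a script, assuming the dataset is a Caffe LMDB of Datum records (the database path is a placeholder; DIGITS may store either raw pixels or encoded PNG/JPEG bytes):

    import lmdb
    import numpy as np
    import caffe

    db_path = 'train_db'   # placeholder: path to the LMDB that DIGITS created

    env = lmdb.open(db_path, readonly=True, lock=False)
    with env.begin() as txn:
        for key, value in txn.cursor():
            datum = caffe.proto.caffe_pb2.Datum()
            datum.ParseFromString(value)
            if datum.encoded:
                # Encoded PNG/JPEG bytes; decode with the image library of your choice.
                print(key, 'label', datum.label, '(encoded,', len(datum.data), 'bytes)')
            else:
                # Raw pixel data stored as bytes in CHW order.
                image = np.frombuffer(datum.data, dtype=np.uint8)
                image = image.reshape(datum.channels, datum.height, datum.width)
                print(key, 'label', datum.label, 'shape', image.shape)
            break   # inspect just the first record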

Thank you for the reply, Allison. Those links are helpful. I wonder if this can be done directly through the web interface; that would be a great help.

Fine-tuning in DIGITS does not seem to be working.

I renamed the layers that are different (let's say in AlexNet you rename the last FC layer to 'fc8-new'). I didn't observe any difference in convergence rate or accuracy compared to training without fine-tuning. Would you please advise?

I have not had any issues with fine-tuning in DIGITS. When I fine-tune, I am showing the trained network new data; is this what you are doing? Also, the few times I have done this I found the accuracy of my fine-tuned network to be comparable to what it was before. The main difference is that it is now tuned for accurate classification with new categories. Can you tell me a little more about what you are trying to do?
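
For reference, the renaming described above looks roughly like this in the custom network definition (a sketch assuming the standard AlexNet layer names; num_output is a placeholder for the number of classes in the new dataset). Layers whose names match the pretrained snapshot load their old weights, while a renamed layer such as 'fc8-new' starts from fresh initialization:

    layer {
      name: "fc8-new"          # renamed so the old fc8 weights are NOT loaded
      type: "InnerProduct"
      bottom: "fc7"
      top: "fc8-new"
      inner_product_param {
        num_output: 10         # placeholder: number of classes in the new dataset
      }
    }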

That is a very nice project! It works great. The only problem I had was when I tried to classify many images. I uploaded a list of images, but the system crashed saying "Input must have 4 axes, corresponding to (num, channels, height, width)". I'm trying to classify the same images I used for training.