Does the "DIGITS 4" development environment run on the "Jetson TX1" platform?
I am planning to use the DetectNet deep neural network model provided by NVIDIA to perform vehicle detection on the Jetson TX1 platform. I would appreciate it if you could share any experiences you might have with this kind of experiment.
Hi Shervin, the DIGITS training system is not supported on ARM/Jetson; it is meant to run on a PC for training. This is partly because the nvcaffe build that DIGITS uses for training is, on TX1, optimized for FP16 inference rather than training. So to train a network, run DIGITS in the cloud (e.g. AWS or Azure) or on a local x86 machine. With each training epoch, DIGITS saves a network model checkpoint, which you can copy over to your Jetson for deploying the inference. This works with DetectNet as well: once you have it trained in DIGITS to your liking, copy it over to your Jetson. There you can load it with TensorRT using example code like this.
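As a rough sketch of the copy step (the job ID, snapshot file names, and Jetson address below are placeholders; substitute the paths from your own DIGITS jobs directory):

```shell
# On the machine running DIGITS: each trained model lives in a job directory
# (e.g. digits/jobs/<job-id>) containing the snapshot weights and the deploy
# network description. Copy both to the Jetson for TensorRT inference.

# Placeholder job ID, epoch, and host name -- adjust to your setup
scp digits/jobs/20170929-000000-abcd/snapshot_iter_10000.caffemodel \
    digits/jobs/20170929-000000-abcd/deploy.prototxt \
    nvidia@jetson-tx1.local:~/models/detectnet/
```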
I have been following the posts related to DIGITS training and inference on the Jetson TX1. I understand that DIGITS can only be set up in the cloud or on a computer with a GPU. I am going through the tutorial at [url]https://github.com/dusty-nv/jetson-inference#system-setup[/url]. It explains the steps to install JetPack on the Jetson, and as part of that process JetPack also installs the necessary CUDA toolkit on the host machine that will be used to run DIGITS.
However, I have already installed JetPack on the TX1 from a local computer and would now like to use a Google Cloud VM instance for setting up DIGITS. Are there any related posts on doing so? Please let me know.
Preparing to unpack /tmp/ml-repo.deb ...
Unpacking nvidia-machine-learning-repo-ubuntu1604 (1.0.0-1) over (1.0.0-1) ...
Setting up nvidia-machine-learning-repo-ubuntu1604 (1.0.0-1) ...
gpg: no valid OpenPGP data found.
Failed to add GPGKEY at http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/7fa2af80.pub to apt keys.
Not sure how to resolve this. I am still trying to find a solution, but if anyone reading this knows of relevant resources, please point me to them.
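One workaround I have seen suggested for this kind of failure is to fetch the signing key manually and hand it to apt-key (assuming the VM has outbound HTTP access; the URL is the one from the error message above):

```shell
# Download the repository signing key referenced in the error message
wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/7fa2af80.pub

# Add it to apt's trusted keys, then refresh the package lists
sudo apt-key add 7fa2af80.pub
sudo apt-get update
```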
For Ubuntu, it looks like the CUDA downloads currently support 16.04 and 17.04. Does your cloud provider offer 16.04? If you still have trouble installing the DEB package, you can try the runfile instead to install directly:
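A minimal sketch of the runfile route (the exact file name depends on the CUDA version you download; the one below is illustrative):

```shell
# Make the runfile executable and install the toolkit non-interactively
# (--override skips the compiler version check; omit it if your gcc is supported)
chmod +x cuda_8.0.61_375.26_linux.run   # illustrative file name
sudo ./cuda_8.0.61_375.26_linux.run --silent --toolkit --override

# Put CUDA on the path afterwards
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
```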
On another note, I have set up DIGITS 6 in the cloud and was trying to train a model on a custom dataset. Every time I create a model, it fails with error code -6. After peeking into the log file, I saw the following error message:
I0929 00:37:08.050968 9779 layer_factory.hpp:77] Creating layer cluster
[libprotobuf FATAL google/protobuf/stubs/common.cc:61] This program requires version 3.2.0 of the Protocol Buffer runtime library, but the installed version is 2.6.1. Please update your library. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "google/protobuf/descriptor.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): This program requires version 3.2.0 of the Protocol Buffer runtime library, but the installed version is 2.6.1. Please update your library. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "google/protobuf/descriptor.pb.cc".)
*** Aborted at 1506645428 (unix time) try "date -d @1506645428" if you are using GNU date ***
PC: @ 0x7fda5d10b428 gsignal
*** SIGABRT (@0x3ea00002633) received by PID 9779 (TID 0x7fda5f433ac0) from PID 9779; stack trace: ***
I installed the protobuf compiler as mentioned in the installation steps. However, the error above still appears when I run training, and I am not sure which step went wrong.
Maybe I should install protobuf 3.2 from source and then rebuild caffe? I will try that out, but if someone reads this and has a suggestion, it could save me some time on futile attempts.
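In case it saves someone else time, this is the sequence I would try (the protobuf release tag matches the version the error message asks for; the caffe path is a placeholder for your own checkout):

```shell
# Remove the distro's protobuf dev packages so the 2.6.1 headers can't leak in
sudo apt-get remove -y libprotobuf-dev protobuf-compiler

# Build and install protobuf 3.2.0 from source
wget https://github.com/google/protobuf/archive/v3.2.0.tar.gz
tar -xzf v3.2.0.tar.gz && cd protobuf-3.2.0
./autogen.sh && ./configure
make -j"$(nproc)"
sudo make install && sudo ldconfig

# Rebuild caffe from a clean tree so it links against the new runtime
cd ../caffe   # placeholder: path to your nvcaffe checkout
make clean && make -j"$(nproc)" all
```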