Need post-flash instructions for Jetson TX1

I have an existing TensorFlow model that I have been running on an Intel i7 platform with a Titan XP GPU. I would like to benchmark this 6-layer CNN model on the Jetson TX1 to see what the performance loss will be on an ARM platform. I have used the JetPack download to flash the full 64-bit image onto the Jetson development board, but now I am stuck. Here are some questions:

  1. Before flashing with JetPack, my Jetson board login and password were both ubuntu. After the JetPack flash, the login and password are now nvidia. It appears that I have two users under the home directory now – the original ubuntu, and the new one – nvidia. Is this correct? In most of your tutorials, you imply that the username/password is ubuntu/ubuntu. There is no reference to the nvidia user.
  2. Do I have to manually install packages such as cuDNN, TensorRT, etc. now? If so, where are they located, and what commands should I use?
  3. What specific steps would I take to run my TensorFlow model on the Jetson platform? I understand that TensorRT is designed to do inference on the Jetson, but how do I go about executing TensorRT?
  4. How is cuDNN invoked? Is that automatically taken care of by TensorRT?

In general, it seems like you have very good documentation about flashing the development board, and you also have a lot of high-level tutorials about applications that could be implemented on the board. But it seems like you are missing the series of concrete steps that it takes to run the tools once the board’s serial flash has been programmed.

Thanks for your help.

The nvidia and ubuntu accounts are both “admin” accounts…they can both use “sudo”. There does not seem to be much difference between the accounts; you should be able to log in to either one and do anything the ubuntu account could do in the past, regardless of which account you logged in with. Addition of the nvidia account did not remove the ubuntu account…it’s still there.
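
If you want to confirm this on the Jetson itself, here’s a minimal check (a sketch only; the exact group list varies by L4T release, but both users should show membership in the sudo group on a stock install):

# List group membership for each account (run on the Jetson)
id ubuntu
id nvidia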

The additional packages were intended to be installed via JetPack running on an Ubuntu host (beware that a VM host has issues…some people have figured out ways around those issues, but you will need to prepare and do some work if you’re not using a native Ubuntu host). The sample programs are one part of what JetPack can install…JetPack was intended to be able to install those packages (and sample programs) on both the host and the Jetson…just check the appropriate boxes (FYI, JetPack can be run at any time and does not need to do a flash unless you want it to). JetPack host support is officially Ubuntu 14.04, but except for some parts of the host software an Ubuntu 16.04 host is known to work.
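
As a rough sketch of re-running JetPack for package installs only (the installer file name is a placeholder…use whatever version you downloaded):

# On the Ubuntu host: make the installer executable and launch it
chmod +x JetPack-L4T-<your_version>-linux-x64.run
./JetPack-L4T-<your_version>-linux-x64.run
# In the component manager, uncheck the flash step and check only the
# packages (CUDA, cuDNN, TensorRT, samples) you want installed on the Jetson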

Thank you for your reply.

I am running a Linux host with Ubuntu 14.04, and am NOT using a VM on a Windows machine.

That Linux host machine is what I used to install and flash the L4T package.

Are you saying that, after flashing the image, I then need to do additional steps on my Linux host machine to get the software tools running on the Jetson board? If so, what are those steps? I’m still not clear as to exactly what I would do to be able to run my existing TensorFlow model (as inference) on my Jetson TX1 development board. Is there a document that has a list of steps needed for installing/running TensorRT, CUDA, etc.? It’s not clear to me how to run them, how they play together, etc. I found one set of instructions that involved installing an NVIDIA-375 driver on my host machine, but after installing that driver and rebooting, my Ubuntu 14.04 desktop was frozen. Fortunately, I found a way to remove that driver and recover the OS, so I am back to square one, but still not sure how to proceed.

The NVIDIA guide for downloading JetPack and flashing the Jetson board image is very clear, with step-by-step instructions. It would be great if there were similar step-by-step instructions on how to take an existing TensorFlow model and get it running on the TX1.

I have a Fedora host, so I can’t verify details (Ubuntu can’t be installed on my system without destroying existing installs due to bugs in the Ubuntu installer). But the gist is that JetPack can act as a front end to the flash software when the Jetson is in recovery mode…following recovery mode the Jetson is rebooted, and then JetPack behaves differently. This is a second step, and although you can check several things you want to do in the JetPack menu, additional package installs can occur at any time you wish. Simply uncheck the “flash” target, and while the Jetson is running normally, tell JetPack you want to install the various packages on the Jetson. Some of those packages are sample programs for TensorRT and related libraries.

The NVIDIA-375 driver does not apply to a Jetson. It is for a PCI video card, whereas the video card/GPU of the Jetson is directly wired to the memory controller. Parts of the programming interface which are not PCI-specific will mostly work on a Jetson, but if the interface requires PCI features then you are guaranteed the feature cannot work on a Jetson. There may be other cases where a particular CPU architecture is required (such as x86), and the software therefore only works on that CPU architecture.
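
As a quick sanity check (assuming a standard L4T image), you can confirm on the Jetson that the integrated L4T driver is what is installed, rather than any PCI desktop driver:

# Show the L4T driver release string (present on stock L4T installs)
head -n 1 /etc/nv_tegra_release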

I can’t be of much help with the other questions, but I do know that the various APIs for CUDA and related technologies have separate documentation available (those technologies are not specific to a Jetson). I would suggest starting by searching for whatever technology you are interested in at the download center:
https://developer.nvidia.com/embedded/downloads

Within that documentation downloads page you’ll see there is a menu for “LIBRARIES” and “DEVELOPER DOCUMENTATION”. Also, within “TOOLS”, you might check the “Compute” check box.

Hi,

  1. We switched the default account to nvidia in JetPack 3.1. Don’t worry.

  2. JetPack will automatically install the NVIDIA packages, including CUDA, cuDNN, and TensorRT (see the verification sketch after this list).

  3. For TensorFlow, the easiest approach is to install another user’s prebuilt wheel; it is also workable to build TensorFlow from source (see the install sketch after this list).
    Whl: https://devtalk.nvidia.com/default/topic/999726
    Build from source: http://www.yuthon.com/2017/03/10/TensorFlow-r1-0-on-TX1/
    JetsonHacks tutorial (TX2): Build TensorFlow on NVIDIA Jetson TX2 Development Kit - JetsonHacks

  4. Yes. More precisely, TensorRT and TensorFlow use the cuDNN API for acceleration.
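
For item 2, a minimal sketch for verifying the JetPack-installed packages on the Jetson (exact package names vary by JetPack release):

# Check the CUDA toolkit version (you may need /usr/local/cuda/bin on your PATH)
nvcc --version
# List installed cuDNN, TensorRT, and CUDA packages
dpkg -l | grep -i -E 'cudnn|tensorrt|cuda'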
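
For item 3, installing a prebuilt wheel looks roughly like this (a sketch only; the wheel file name is a placeholder for whatever aarch64 wheel you download from the topic above):

# On the Jetson: install pip, then the downloaded aarch64 TensorFlow wheel
sudo apt-get install -y python-pip
sudo pip install <downloaded_tensorflow_aarch64_wheel>.whl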

Hi AastaLLL,

Thanks for your suggestions. I have some follow-up questions:

  1. It appears that the Yuthon blog instructions are still not working. If you go to the bottom of that post, he indicates that he is still getting errors (see the section called ‘Problems’).

  2. The JetsonHacks site that you referenced has nice instructions, but they are for installing TensorFlow on a TX2. I have a TX1, and am skeptical that those instructions will work. Has anyone at NVIDIA actually gotten TensorFlow running on a TX1 platform?

  3. I am not strongly attached to using the TensorFlow framework. Would it be more straightforward to convert my model to Caffe and then use these instructions: Caffe Deep Learning Framework - 64-bit NVIDIA Jetson TX1 - JetsonHacks?

  4. What is NVIDIA’s recommendation for the best framework to run on the TX1 platform?

  5. Do you have any examples that show a step-by-step process for creating a simple model within a desired framework, and then running that model on the Jetson TX1 platform?

Asd56,

See https://devblogs.nvidia.com/parallelforall/jetpack-doubles-jetson-inference-perf/ for some information on deploying a neural network model on a Jetson using TensorRT. That post is with an older version of TensorRT.

We do test TensorRT on both the Jetson TX1 and TX2, but to be clear, the use case we are designing for would have you do the training on the host system and then move the model onto the Jetson TX1 for inference.

Kind Regards,
Chris

Hi asd56,

  1. The Yuthon blog worked before (maybe three months ago).
    There are many dependencies when building TensorFlow, e.g. Bazel and protobuf. Not sure of the current status.

  2. They should be similar.
    We have built TensorFlow successfully on the TX1 before, but we switched to the TX2 a while ago.

  3. We have some topics discussing model translation from TensorFlow to Caffe.
    No existing parser or tool has been found. If you have one, you are welcome to share!

  4. TensorRT.

  5. We have an example for Caffe+TensorRT; check here (a build sketch follows after the link):
    GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
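
A rough sketch of building and running that repo on the Jetson (based on its README at the time; check the repo for current steps):

# On the Jetson: install build tools, then clone and build with CMake
sudo apt-get install -y git cmake
git clone https://github.com/dusty-nv/jetson-inference
cd jetson-inference
mkdir build
cd build
cmake ../
make
# Try the console image-recognition sample on a bundled test image
cd aarch64/bin
./imagenet-console orange_0.jpg output_0.jpg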

Hi,
Is there a standalone JetPack 3.1 installation?
I got a TX1 with JetPack 2.0, but I just want to get the demo examples from JetPack 3.0 or 3.1 without reflashing the whole OS and image.

Thanks,

After the JetPack installer has run (you don’t have to flash anything to run it), the file repository.json is created. Within this file is a description of all of the package downloads which produce a local repository on the Jetson (some downloads are for the host). You can use wget on those files and manually copy them to the Jetson.

An example of how I run JetPack on Fedora just to get repository.json (adjust for your version):

# Unpack the JetPack installer without running it
bash ./JetPack-L4T-<your_version>-linux-x64.run --noexec
cd _installer
# Browse the package list for download URLs
less repository.json
# Inside less, search for .deb download URLs:
/http.*deb
# 'n' key to see the next match, 'N' key to see the previous
# Download any package you want manually:
wget http://whatever....deb

Beware that this doesn’t mean the downloaded files will be compatible with a different L4T install. If dpkg lets you install the package, it should work. Sometimes the order of installation will matter. Mostly you should just use JetPack for the package installs unless you cannot (I have a Fedora host, so this is how I have to get files).
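
A minimal sketch of that final copy-and-install step (the Jetson address and package name are placeholders):

# From the host: copy a downloaded package to the Jetson over the network
scp ./<package>.deb nvidia@<jetson_ip>:/tmp/
# Then on the Jetson: install it with dpkg (watch the dependency order)
sudo dpkg -i /tmp/<package>.deb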