There is a serious problem with the Jetson TX2 that makes it close to impossible for a mining company to adopt.

First of all, I reflashed my Jetson TX2 with JetPack 3.3 and confirmed everything was working by running the samples afterwards.

I work at a gold mining company, and I'm trying to showcase an experiment in sorting ore rocks that I built on my Windows 10 workstation, where I used TensorFlow on a Quadro and got it working.

Now I want to run inference, which is why we bought a Jetson TX2. However, there is a serious lack of documentation on how to convert a TensorFlow model to TensorRT. I spent HOURS searching for a clue and found the NVIDIA developer video that talks about ONNX. Great! So I tried it, but the front page of the TensorFlow website and its tutorials tell you to use Keras (model = Sequential(), etc.).

However, your video shows how to freeze a graph and hand that to TensorRT to optimize. This is failure point 1: there is no documentation on what to do with a model that doesn't explicitly use a TensorFlow Session. What do we freeze?

OK, so we move forward by ignoring the whole TensorRT part; at minimum we should be able to run inference at a decent speed using TensorFlow-GPU, the same way I did on my Quadro.

I created an image classifier CNN with TensorFlow on my Quadro, trained it, saved the weights after training, and loaded them back when needed. Now I want to run the inference on the Jetson TX2…

Problem #1: Python 3 is practically unusable on the Jetson TX2. OpenCV is installed, and the preinstalled Python 2.7 finds it without issue, but I cannot figure out how to bind OpenCV to Python 3.

After HOURS wasted, I gave up on Python 3 and tried Python 2.7 instead. Now I have to install TensorFlow-GPU and a few other things… no problem! However, it is impossible to install Keras!

It keeps failing to build some wheel files and throws this:
Failed building wheel for h5py
Running setup.py clean for h5py
Successfully built pyyaml
Failed to build scipy h5py
Installing collected packages: scipy, six, numpy, keras-preprocessing, pyyaml, h5py, keras-applications, keras
Running setup.py install for scipy … error

Then at the end it throws this:
config = setup_module.configuration(*args)
File "scipy/linalg/setup.py", line 19, in configuration
raise NotFoundError('no lapack/blas resources found')
numpy.distutils.system_info.NotFoundError: no lapack/blas resources found

----------------------------------------

Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-KmHg_7/scipy/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-scg9OB-record/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-build-KmHg_7/scipy/

I cannot even get basic TensorFlow to work!!!

I know I can use TensorFlow without Keras, but TensorFlow themselves suggest using Keras, and all the tutorials use it, so I really need to get this last step going; otherwise the whole point of this kit is not worth it!

I tried upgrading setuptools and pip and everything else I found on the internet.

Why isn't this packaged with JetPack 3.3??

I am not a Unix expert, but I have some decent knowledge. However, we are a mining company; we don't use Linux, we are not a technology company, and we don't have a team of Unix sysadmins. We can't be wasting time setting the thing up and hitting so many issues just to get a basic experiment working.
The whole point of JetPack and the Jetson was to make it easy for us one-person developer teams to bring solutions to problems without having to recompile Ubuntu.

I'm trying my best to bring edge computing to mining, and people are excited about it, but so far the Jetson TX2 has been a failure for us due to the lack of documentation, support, clear examples, and instructions; the failure to stay up to date with their partner TensorFlow; and the setup issues I have outlined here.

Is there anything I can do to fix this before I discard the whole idea of edge computing?
Take the example from the TensorFlow tutorials homepage:

import tensorflow as tf
mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

How do I convert this to TensorRT so I don't have to suffer through installing Keras, etc.?

Thank you, and I hope somebody at NVIDIA takes this seriously, because this is a game breaker for us; it makes support impossible!

Hi LeMoth,

Thanks for reaching out. Sounds like an interesting project that you’re working on.

In order to optimize with TensorRT, you first need to obtain a ‘frozen graph’ corresponding to your model.

Keras is a high-level API that wraps TensorFlow calls, and under the hood is represented by a TensorFlow graph. The first step would be to get the TensorFlow ‘GraphDef’ representing your model.

graph_def = tf.keras.backend.get_session().graph.as_graph_def()

Next, you’ll need to know the names of the input and output nodes in your graph. It may help to use the ‘TensorBoard’ tool to visualize your graph and determine this information. There should be external documentation on how to do this. If you need more details, let me know.

Once you've determined the output names, you can use the TensorRT integration in TensorFlow to optimize your model for inference. We have a project that demonstrates this for image classification and object detection models:

https://github.com/NVIDIA-Jetson/tf_trt_models

You can then load this graph directly with TensorFlow and call session.run() to perform inference. The examples in the above project should provide some insight on how to do this.

Hope this helps, let me know if you have questions.

John

Hi John,

Thank you for the quick reply. Luckily, someone in the office knew a bit more about Unix than I do, and we finally got past that wheel error when installing Keras. We found out that we had to manually install a library first by running this:

sudo apt-get install libhdf5-dev

and then the Keras installation, or more precisely the installation of h5py, completed successfully! I was then able to run my TensorFlow project on the TX2, which is a great relief!
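For reference, in case somebody else hits this, the commands we ran were roughly the following. The BLAS/LAPACK and Fortran packages address the earlier scipy "no lapack/blas resources found" error; the package names are my best guess for the Ubuntu 16.04 base that JetPack 3.3 uses:

```shell
# h5py needs the HDF5 development headers to build its wheel
sudo apt-get install libhdf5-dev

# scipy builds from source and needs a BLAS/LAPACK implementation
# plus a Fortran compiler
sudo apt-get install libblas-dev liblapack-dev gfortran

# then retry the Keras install
pip install --user keras
```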

I am going to follow your tutorial and GitHub links and try to use TensorRT. Thank you very much for this lead.

I do have another question. Given that my TX2 is now ready for deployment in our trial project, can I make a backup of what's on this TX2 right now (OS, JetPack, libraries, and the external dependencies I had to install) and flash it onto another TX2? It would make my life much easier when writing a plan the I.T. guys can simply follow to deploy.

LM

Hi LeMoth,

Please refer to https://elinux.org/Jetson/TX2_Cloning
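In short, the procedure on that page uses the flash.sh script from the L4T host installation, run from the Linux_for_Tegra directory on your host PC with the TX2 connected in recovery mode. A rough sketch (the APP partition is the root filesystem; the image filename is arbitrary):

```shell
# Back up the APP (rootfs) partition from the configured TX2
sudo ./flash.sh -r -k APP -G backup.img jetson-tx2 mmcblk0p1

# Restore onto another TX2: put the raw image where flash.sh expects
# the system image, then flash with -r so it reuses it as-is
sudo cp backup.img.raw bootloader/system.img
sudo ./flash.sh -r -k APP jetson-tx2 mmcblk0p1
```

Please see the linked page for the authoritative steps for your L4T version.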

Thanks