I’m trying to install the ISAAC SDK dependencies on my Jetson TX2, which I apparently need in order to install Bazel. (I saw in a post that Bazel is required to install TensorFlow 2.0 on the TX2.) However, the dependency install hangs and never completes on my Jetson. Any idea what might be wrong? Please see the attached screenshot.
Does JetPack 4.2 support TensorFlow 2.0?
I tried installing TensorFlow with the command:
sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu
from the link you gave. It apparently installs only TensorFlow 1.14.
Sorry, TensorFlow 2.0 is on our roadmap but not ready yet.
For now, you can build it from source or use a package provided by the community.
Thanks. I installed TensorFlow 2.0 using the above GitLab link. However, my TX2 doesn’t print any TensorFlow output when I run it from the command prompt. Could it be a low-memory issue? I also noticed a message that says “ARM64 does not support NUMA”.
Please see the attached screenshots. Thanks
The NUMA log line is a harmless warning, so you don’t need to worry about it.
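If the warning is noisy, TensorFlow’s standard `TF_CPP_MIN_LOG_LEVEL` environment variable can hide it; a minimal sketch (it must be set before TensorFlow is imported):

```python
import os

# Silence TensorFlow's C++ log output, including harmless lines such as
# "ARM64 does not support NUMA". Must be set BEFORE importing tensorflow:
# 0 = show all, 1 = hide INFO, 2 = hide INFO + WARNING, 3 = errors only.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

# import tensorflow as tf  # import only after setting the variable
```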
Based on the 12-13-04 screenshot, it looks like TensorFlow executes without issue.
The blank output issue seems related to the DISPLAY.
Which script do you use for testing?
Would you mind turning off the display output (e.g. im.show()) and trying again?
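For reference, a quick way to check whether the session even has a display to draw on (a hypothetical helper, not from the original script):

```python
import os

def has_display() -> bool:
    # Over a plain SSH session DISPLAY is usually unset, so windows
    # opened by calls like im.show() never appear on screen.
    return bool(os.environ.get("DISPLAY"))

if not has_display():
    print("No DISPLAY found; save output to a file instead of showing a window.")
```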
Please see the attached script. The first two outputs are plain Python and NumPy. The third is TensorFlow, which doesn’t seem to produce any output. Not sure why TensorFlow won’t execute.
OK, I modified the attached script slightly by changing ‘1e6’ to ‘1e3’ and finally got output from TensorFlow. Apparently, TensorFlow or the Jetson TX2 is running much slower than normal. Any idea why? Is there anything I can do to speed it up? Thanks
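To put numbers on the slowdown, the workload can be wrapped in a simple wall-clock timer (a hypothetical helper; the real script would call its TensorFlow computation in place of the stand-in below):

```python
import time

def timed(fn, *args):
    # Run fn(*args) and report elapsed wall-clock time.
    start = time.perf_counter()
    result = fn(*args)
    print(f"{fn.__name__}: {time.perf_counter() - start:.4f}s")
    return result

# Stand-in workload; compare e.g. a 1e3-sized run against a 1e6-sized one
# to see how the runtime scales on the TX2.
timed(sum, range(1_000))
```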
That’s because stock TensorFlow isn’t optimized for the Jetson platform.
Its implementation tends to be memory-intensive and involves some process switching (CPU<->GPU) on Jetson.
It’s recommended to convert your model into a TensorRT engine.
TensorRT chooses suitable algorithms based on the available GPU memory and architecture.
It should give you much better performance.
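A minimal sketch of the TF-TRT route, assuming a TF 2.x build with TensorRT support (as shipped in NVIDIA’s Jetson wheels); the paths are placeholders, and this won’t run off-device:

```python
# Requires TensorFlow built with TensorRT support (e.g. NVIDIA's
# Jetson wheels); not runnable on a machine without TensorRT.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# "saved_model" / "saved_model_trt" are placeholder paths.
converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model")
converter.convert()                # replace supported subgraphs with TensorRT ops
converter.save("saved_model_trt")  # write the optimized SavedModel
```

Load the saved model as usual afterwards; only the TensorRT-compatible subgraphs are replaced, so unsupported ops still fall back to TensorFlow.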