TensorFlow 2.1.0 not working on Jetson Nano

After installing the JetPack 4.4 image on an SD card, I tried to execute some simple code using TensorFlow 2.1.0, but it seems that the code is not recognized as TF2.

import tensorflow as tf
import os
import numpy as np
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
print(dataset)

I get:

2020-05-11 16:50:56.809451: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.2
2020-05-11 16:50:59.381362: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libnvinfer.so.7
2020-05-11 16:50:59.383926: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libnvinfer_plugin.so.7
<DatasetV1Adapter shapes: (), types: tf.int32>

Instead of:

<TensorSliceDataset shapes: (), types: tf.int32>

Why is it recognized as a DatasetV1Adapter instead of a TensorSliceDataset?
It seems that every tf object is treated as TF1 instead of TF2.
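
A quick way to see this is to check whether eager execution (the TF2 default) is active. This is my own diagnostic snippet, not output from the system above:

import tensorflow as tf

print(tf.__version__)          # 2.1.0+nv20.4 on this wheel
print(tf.executing_eagerly())  # a standard TF2 build prints True; False means TF1 graph-mode behavior is active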

My environment uses:
JetPack 4.4
tensorflow-2.1.0+nv20.4-cp36-cp36m-linux_aarch64.whl
Python 3.6.9

sudo apt-cache show nvidia-jetpack

Package: nvidia-jetpack
Version: 4.4-b144
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 195
Depends: nvidia-container-csv-cuda (= 10.2.89-1), libopencv-python (= 4.1.1-2-gd5a58aa75), libvisionworks-sfm-dev (= 0.90.4.501), libvisionworks-dev (= 1.6.0.501), libnvparsers7 (= 7.1.0-1+cuda10.2), libnvinfer-plugin-dev (= 7.1.0-1+cuda10.2), libnvonnxparsers7 (= 7.1.0-1+cuda10.2), libnvinfer-samples (= 7.1.0-1+cuda10.2), libnvinfer-bin (= 7.1.0-1+cuda10.2), libvisionworks-samples (= 1.6.0.501), libvisionworks-tracking-dev (= 0.88.2.501), vpi-samples (= 0.2.0), tensorrt (= 7.1.0.16-1+cuda10.2), libopencv (= 4.1.1-2-gd5a58aa75), libnvinfer-doc (= 7.1.0-1+cuda10.2), libnvparsers-dev (= 7.1.0-1+cuda10.2), libnvidia-container0 (= 0.9.0~beta.1), nvidia-container-csv-visionworks (= 1.6.0.501), cuda-toolkit-10-2 (= 10.2.89-1), graphsurgeon-tf (= 7.1.0-1+cuda10.2), libcudnn8 (= 8.0.0.145-1+cuda10.2), libopencv-samples (= 4.1.1-2-gd5a58aa75), nvidia-container-csv-cudnn (= 8.0.0.145-1+cuda10.2), python-libnvinfer-dev (= 7.1.0-1+cuda10.2), libnvinfer-plugin7 (= 7.1.0-1+cuda10.2), libvisionworks (= 1.6.0.501), libcudnn8-doc (= 8.0.0.145-1+cuda10.2), nvidia-container-toolkit (= 1.0.1-1), libnvinfer-dev (= 7.1.0-1+cuda10.2), nvidia-l4t-jetson-multimedia-api (>> 32.4-0), nvidia-l4t-jetson-multimedia-api (<< 32.5-0), libopencv-dev (= 4.1.1-2-gd5a58aa75), vpi-dev (= 0.2.0), vpi (= 0.2.0), libcudnn8-dev (= 8.0.0.145-1+cuda10.2), python3-libnvinfer (= 7.1.0-1+cuda10.2), python3-libnvinfer-dev (= 7.1.0-1+cuda10.2), opencv-licenses (= 4.1.1-2-gd5a58aa75), nvidia-container-csv-tensorrt (= 7.1.0.16-1+cuda10.2), libnvinfer7 (= 7.1.0-1+cuda10.2), libnvonnxparsers-dev (= 7.1.0-1+cuda10.2), uff-converter-tf (= 7.1.0-1+cuda10.2), nvidia-docker2 (= 2.2.0-1), libvisionworks-sfm (= 0.90.4.501), libnvidia-container-tools (= 0.9.0~beta.1), nvidia-container-runtime (= 3.1.0-1), python-libnvinfer (= 7.1.0-1+cuda10.2), libvisionworks-tracking (= 0.88.2.501)
Conflicts: cuda-command-line-tools-10-0, cuda-compiler-10-0, cuda-cublas-10-0, cuda-cublas-dev-10-0, cuda-cudart-10-0, cuda-cudart-dev-10-0, cuda-cufft-10-0, cuda-cufft-dev-10-0, cuda-cuobjdump-10-0, cuda-cupti-10-0, cuda-curand-10-0, cuda-curand-dev-10-0, cuda-cusolver-10-0, cuda-cusolver-dev-10-0, cuda-cusparse-10-0, cuda-cusparse-dev-10-0, cuda-documentation-10-0, cuda-driver-dev-10-0, cuda-gdb-10-0, cuda-gpu-library-advisor-10-0, cuda-libraries-10-0, cuda-libraries-dev-10-0, cuda-license-10-0, cuda-memcheck-10-0, cuda-misc-headers-10-0, cuda-npp-10-0, cuda-npp-dev-10-0, cuda-nsight-compute-addon-l4t-10-0, cuda-nvcc-10-0, cuda-nvdisasm-10-0, cuda-nvgraph-10-0, cuda-nvgraph-dev-10-0, cuda-nvml-dev-10-0, cuda-nvprof-10-0, cuda-nvprune-10-0, cuda-nvrtc-10-0, cuda-nvrtc-dev-10-0, cuda-nvtx-10-0, cuda-samples-10-0, cuda-toolkit-10-0, cuda-tools-10-0, libcudnn7, libcudnn7-dev, libcudnn7-doc, libnvinfer-plugin6, libnvinfer6, libnvonnxparsers6, libnvparsers6
Homepage: http://developer.nvidia.com/jetson
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.4-b144_arm64.deb
Size: 30394
SHA256: 1d9d4937623862e4990d25df9a0dd09c78ddbbc4919d1f4c9bf4cd8df09b8869
SHA1: 0608076bbb7ee28f2c388532594ff1951f99e61b
MD5sum: 2c12a5042171a8caa2dd3e4a32246cd2
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

pip3 show tensorflow

Name: tensorflow
Version: 2.1.0+nv20.4
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: packages@tensorflow.org
License: Apache 2.0
Location: /usr/local/lib/python3.6/dist-packages
Requires: google-pasta, numpy, keras-applications, gast, wrapt, scipy, astor, tensorboard, keras-preprocessing, absl-py, termcolor, wheel, protobuf, tensorflow-estimator, grpcio, six, opt-einsum
Required-by:

I have also updated all packages to their latest versions, but I always get the same issue.

Hi,

We are checking this issue.
Will share more information with you later.

Thanks.

I have found a workaround for this issue in this post:
https://forums.developer.nvidia.com/t/official-tensorflow-for-jetson-nano/71770/127

It looks like the current TensorFlow for JetPack 4.4 was compiled with the --config=v1 flag, as V2 behavior seems to be disabled by default.

The workaround is:

import tensorflow.compat.v2 as tf           # alias the v2 API as tf
import tensorflow.compat.v2.keras as keras  # same for keras
tf.enable_v2_behavior()                     # enable TF2 semantics (eager execution, etc.)
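
To verify it, here is a minimal check (my own snippet, assuming the workaround behaves as described in the linked post):

import tensorflow.compat.v2 as tf

tf.enable_v2_behavior()

print(tf.executing_eagerly())  # expected: True once v2 behavior is enabled
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
print(dataset)                 # expected: <TensorSliceDataset shapes: (), types: tf.int32>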


This is not a solution if you have all kinds of official packages that depend on TensorFlow 2.1.0's interfaces. It is quite infeasible to change all of those packages to use this workaround.
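
The closest alternative I can think of is a process-wide shim that aliases the v2 API before anything else imports tensorflow. This is a hypothetical sketch of mine (the module name tf2_shim is made up), not something NVIDIA documents, and it is fragile:

# tf2_shim.py -- import this before any package that does `import tensorflow`
import sys
import tensorflow.compat.v2 as tf

tf.enable_v2_behavior()          # same call as the workaround above
sys.modules["tensorflow"] = tf   # later `import tensorflow` statements now resolve to the v2 API

Even if that works for simple cases, it is exactly the kind of hack we should not have to rely on.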

Please, NVIDIA, release either a new build of v2.1.0 or a later version for JetPack 4.4, because as it is now we can't use TensorFlow 2.1.0 in any meaningful way on the Jetson Xavier NX, and we can't downgrade to an earlier JetPack (as this is the first JetPack that is compatible with the Jetson Xavier NX).

It’s been more than two months since the release of the faulty v2.1.0; what is the holdup in getting us a working v2.1.x?
