TF-TRT breaking on simple MNIST model

I created a very simple proof of concept to test TF-TRT conversion, but it does not work.

First I created the model:

import tensorflow as tf
import tensorflow_datasets as tfds

(ds_train, ds_test), ds_info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)

def normalize_img(image, label):
  """Normalizes images: `uint8` -> `float32`."""
  return tf.cast(image, tf.float32) / 255., label

# Build the training pipeline: normalize, cache, shuffle, batch, prefetch.
ds_train = ds_train.map(
    normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(128)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)

# Build the evaluation pipeline (cache after batching, no shuffle).
ds_test = ds_test.map(
    normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(128)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.AUTOTUNE)

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(512, activation='relu'),
  tf.keras.layers.Dense(10)
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(0.001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)

model.fit(
    ds_train,
    epochs=6,
    validation_data=ds_test,
)

model.save("saved_model")

Then I tried to convert it with TF-TRT:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert the SavedModel to a TF-TRT optimized graph and save it.
converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model")
converter.convert()
converter.save("output")

But it crashes:

Traceback (most recent call last):
  File "save_trt.py", line 5, in <module>
    converter.convert()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1196, in convert
    self._input_saved_model_tags)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 864, in load
    result = load_internal(export_dir, tags, options)["root"]
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 903, in load_internal
    ckpt_options, options, filters)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 162, in __init__
    self._load_all()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 259, in _load_all
    self._load_nodes()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 448, in _load_nodes
    slot_variable = optimizer_object.add_slot(
AttributeError: '_UserObject' object has no attribute 'add_slot'

I am on a Xavier AGX with TensorFlow 2.6.2 on JetPack 4.6.1.

Thanks!

Hi,

Just want to confirm first:
Did you set up your device with JetPack 4.6.1? That version was released this week.

If yes, please note that you will need to install the v2.7.0+nv22.1 prebuilt TensorFlow package for compatibility:
https://developer.download.nvidia.com/compute/redist/jp/v461/tensorflow/

Thanks.

We have JetPack 4.6.1.
TensorFlow was installed the recommended way, with NVIDIA's pip3 indexes (Installing TensorFlow for Jetson Platform :: NVIDIA Deep Learning Frameworks Documentation). The installation from the v461 index crashes because the installed TensorRT version is 8.0.1, not 8.2; we worked around this by using the v46 index instead, and everything else TRT-related seems to work correctly.

We uninstalled TensorFlow 2.6.2, installed TensorFlow 2.7, and now get this error when converting:

ERROR:tensorflow:Loaded TensorRT 8.0.1 but linked TensorFlow against TensorRT 8.2.1. A few requirements must be met:
	-It is required to use the same major version of TensorRT during compilation and runtime.
	-TensorRT does not support forward compatibility. The loaded version has to be equal or more recent than the linked version.
Traceback (most recent call last):
  File "save_trt.py", line 4, in <module>
    converter = trt.TrtGraphConverterV2(input_saved_model_dir="saved_model")
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py", line 552, in new_func
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1104, in __init__
    _check_trt_version_compatibility()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 262, in _check_trt_version_compatibility
    raise RuntimeError("Incompatible TensorRT major version")
RuntimeError: Incompatible TensorRT major version
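The two requirements printed in that error can be sketched as a small standalone check. This follows the rules as stated in the message, not TensorFlow's actual internal logic, and trt_compatible is a hypothetical helper name:

```python
# Sketch of the TensorRT compatibility rules from the error message above.
# trt_compatible is a hypothetical helper, not TensorFlow's real check.

def parse_version(v):
    """Split a 'major.minor.patch' string into a tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def trt_compatible(loaded, linked):
    """Return True if the loaded TensorRT can serve a TensorFlow build
    linked against `linked`: same major version, and loaded >= linked."""
    loaded_v, linked_v = parse_version(loaded), parse_version(linked)
    if loaded_v[0] != linked_v[0]:
        return False  # different major versions are never compatible
    return loaded_v >= linked_v  # no forward compatibility

# The combination from the traceback above:
print(trt_compatible("8.0.1", "8.2.1"))  # False: loaded is older than linked
print(trt_compatible("8.2.1", "8.2.1"))  # True
```

In other words, the TensorRT runtime on the device (8.0.1, i.e. JetPack 4.6) is older than the one the tensorflow-2.7.0+nv22.1 wheel was linked against (8.2.1, i.e. JetPack 4.6.1), so the converter refuses to start.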

Please note that the Jetson installation was done with the Docker CLI version of sdkmanager, using these commands:

docker run -it --rm --privileged -v /dev/bus/usb:/dev/bus/usb/ --name JetPack_TX2_Devkit sdkmanager --cli install --logintype devzone --product Jetson --target P2888-0001 --targetos Linux --version 4.6 --select 'Jetson OS' --deselect 'Jetson SDK Components' --flash all --license accept --staylogin true --datacollection disable --exitonfinish
docker run -it --rm --privileged -v /dev/bus/usb:/dev/bus/usb/ --name JetPack_TX2_Devkit sdkmanager --cli install --logintype devzone --product Jetson --target P2888-0001 --targetos Linux --version 4.6 --deselect 'Jetson OS' --select 'Jetson SDK Components' --flash all --license accept --staylogin true --datacollection disable --exitonfinish

Notice how the version passed is 4.6, but the installed version is 4.6.1. Running cat /etc/nv_tegra_release afterwards on the Xavier gives:

# R32 (release), REVISION: 6.1, GCID: 27863751, BOARD: t186ref, EABI: aarch64, DATE: Mon Jul 26 19:36:31 UTC 2021

Hi,

It seems there is some confusion between the JetPack version and the OS branch.
Please note that JetPack 4.6.1 includes OS r32.7.1.

$ apt show nvidia-jetpack 
Package: nvidia-jetpack
Version: 4.6.1-b110
Priority: standard
Section: metapackages
Maintainer: NVIDIA Corporation
Installed-Size: 199 kB
Depends: nvidia-cuda (= 4.6.1-b110), nvidia-opencv (= 4.6.1-b110), nvidia-cudnn8 (= 4.6.1-b110), nvidia-tensorrt (= 4.6.1-b110), nvidia-visionworks (= 4.6.1-b110), nvidia-container (= 4.6.1-b110), nvidia-vpi (= 4.6.1-b110), nvidia-l4t-jetson-multimedia-api (>> 32.7-0), nvidia-l4t-jetson-multimedia-api (<< 32.8-0)
Homepage: http://developer.nvidia.com/jetson
Download-Size: 29.4 kB
APT-Sources: https://repo.download.nvidia.com/jetson/t194 r32.7/main arm64 Packages
Description: NVIDIA Jetpack Meta Package
$ cat /etc/nv_tegra_release 
# R32 (release), REVISION: 7.1, GCID: 29818004, BOARD: t186ref, EABI: aarch64, DATE: Sat Feb 19 17:07:00 UTC 2022
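The relationship between the /etc/nv_tegra_release string and the JetPack version can be sketched with a small parser. The mapping table is illustrative and only covers the two L4T releases that appear in this thread:

```python
import re

# Illustrative mapping from L4T releases to JetPack versions;
# only the two releases seen in this thread are listed.
L4T_TO_JETPACK = {
    "32.6.1": "4.6",
    "32.7.1": "4.6.1",
}

def jetpack_from_release(line):
    """Parse an /etc/nv_tegra_release line like
    '# R32 (release), REVISION: 7.1, ...' into a JetPack version."""
    m = re.search(r"R(\d+) \(release\), REVISION: ([\d.]+)", line)
    if not m:
        return None
    l4t = "{}.{}".format(m.group(1), m.group(2))
    return L4T_TO_JETPACK.get(l4t)

print(jetpack_from_release(
    "# R32 (release), REVISION: 7.1, GCID: 29818004"))  # 4.6.1
print(jetpack_from_release(
    "# R32 (release), REVISION: 6.1, GCID: 27863751"))  # 4.6
```

This makes the earlier confusion concrete: the board that printed REVISION: 6.1 was running L4T r32.6.1, i.e. JetPack 4.6, not 4.6.1.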

With JetPack 4.6.1, we can install tensorflow-2.7.0+nv22.1 without error.
With JP_VERSION set to 461, the command looks like this:

$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v461 tensorflow
...
Successfully installed absl-py-0.12.0 astunparse-1.6.3 cachetools-4.2.4 charset-normalizer-2.0.12 clang-5.0 dataclasses-0.8 flatbuffers-1.12 google-auth-2.6.0 google-auth-oauthlib-0.4.6 google-pasta-0.2.0 grpcio-1.45.0rc1 importlib-metadata-4.8.3 keras-2.8.0 markdown-3.3.6 oauthlib-3.2.0 opt-einsum-3.3.0 pyasn1-0.4.8 pyasn1-modules-0.2.8 requests-2.27.1 requests-oauthlib-1.3.1 rsa-4.8 six-1.16.0 tensorboard-2.8.0 tensorboard-data-server-0.6.1 tensorboard-plugin-wit-1.8.1 tensorflow-2.7.0+nv22.1 tensorflow-estimator-2.8.0 termcolor-1.1.0 typing-extensions-4.1.1 werkzeug-2.0.3 wheel-0.37.1 wrapt-1.14.0 zipp-3.6.0

Thanks.

I understand now that I don’t actually have JetPack 4.6.1. I will try to install it and report back.

Updated the Xavier to JetPack 4.6.1 for real, and it works now. Thanks.
