TensorRT AttributeError: '_UserObject' object has no attribute 'add_slot'

I am getting this error only when converting a fine-tuned model; when I use the unmodified pretrained model, the conversion works fine.

2021-11-04 16:06:55.345573: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-11-04 16:06:55.349826: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-11-04 16:06:55.350105: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-11-04 16:06:55.350528: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-11-04 16:06:55.350809: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-11-04 16:06:55.351071: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-11-04 16:06:55.351302: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-11-04 16:06:55.766650: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-11-04 16:06:55.766916: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-11-04 16:06:55.767136: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-11-04 16:06:55.767349: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 2440 MB memory: -> device: 0, name: GeForce GTX 1650, pci bus id: 0000:01:00.0, compute capability: 7.5
Traceback (most recent call last):
  File "scripts/convert_saved_model_trt_engine.py", line 20, in <module>
    converter.convert()
  File "/home/hexa/miniconda3/envs/TRT/lib/python3.8/site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1096, in convert
    self._saved_model = load.load(self._input_saved_model_dir,
  File "/home/hexa/miniconda3/envs/TRT/lib/python3.8/site-packages/tensorflow/python/saved_model/load.py", line 864, in load
    result = load_internal(export_dir, tags, options)["root"]
  File "/home/hexa/miniconda3/envs/TRT/lib/python3.8/site-packages/tensorflow/python/saved_model/load.py", line 902, in load_internal
    loader = loader_cls(object_graph_proto, saved_model_proto, export_dir,
  File "/home/hexa/miniconda3/envs/TRT/lib/python3.8/site-packages/tensorflow/python/saved_model/load.py", line 162, in __init__
    self._load_all()
  File "/home/hexa/miniconda3/envs/TRT/lib/python3.8/site-packages/tensorflow/python/saved_model/load.py", line 259, in _load_all
    self._load_nodes()
  File "/home/hexa/miniconda3/envs/TRT/lib/python3.8/site-packages/tensorflow/python/saved_model/load.py", line 448, in _load_nodes
    slot_variable = optimizer_object.add_slot(
AttributeError: '_UserObject' object has no attribute 'add_slot'

This is the code I am using to convert to TensorRT:

import tensorflow as tf

# Build TF-TRT conversion parameters for FP32 precision
params = tf.experimental.tensorrt.ConversionParams(
    precision_mode='FP32')
# Point the converter at the original SavedModel directory
converter = tf.experimental.tensorrt.Converter(
    input_saved_model_dir="saved_models", conversion_params=params)
converter.convert()

# Write the converted model to a new SavedModel directory
converter.save(output_saved_model_dir="trt_model_FP32")
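
For completeness, here is a minimal sketch (not from the original post) of how the converted SavedModel can be loaded and sanity-checked; the "serving_default" signature key and the 1x128x128x3 dummy input are assumptions based on the Keras model shown further below.

import numpy as np
import tensorflow as tf

# Load the TF-TRT converted SavedModel produced by converter.save() above
trt_model = tf.saved_model.load("trt_model_FP32")
infer = trt_model.signatures["serving_default"]  # default Keras serving signature (assumed)

# Feed one dummy image through the converted graph; shape matches the Keras model below
input_name = list(infer.structured_input_signature[1].keys())[0]
dummy = tf.constant(np.random.rand(1, 128, 128, 3).astype("float32"))
outputs = infer(**{input_name: dummy})
print({name: tensor.shape for name, tensor in outputs.items()})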

The conversion code above works when the pretrained model is not modified, but when I change the last layer for my custom data, the saved model gives the error above. This is how I modify the last layer to predict 2 classes:

import tensorflow as tf
from tensorflow.keras.applications import ResNet50

# Replace the top of a pretrained ResNet50 with a 2-class head
model = ResNet50(include_top=False, weights="imagenet")
inputs = tf.keras.Input(shape=(128, 128, 3))
output = model(inputs)
output = tf.keras.layers.GlobalAveragePooling2D()(output)
output = tf.keras.layers.Dense(2)(output)
model = tf.keras.Model(inputs, output)
model.save('saved_models')
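
For context, the snippet above only builds and saves an untrained head, which by itself should not contain optimizer state; the add_slot failure in load.py happens while restoring optimizer slot variables, which suggests the SavedModel being converted was exported after compiling and training. A self-contained sketch of that flow with placeholder data is below; the optimizer, loss, epoch count, dummy data, and the include_optimizer=False option are assumptions, not part of the original post.

import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import ResNet50

# Same 2-class head on top of a pretrained ResNet50 as above
base = ResNet50(include_top=False, weights="imagenet")
inputs = tf.keras.Input(shape=(128, 128, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base(inputs))
outputs = tf.keras.layers.Dense(2)(x)
model = tf.keras.Model(inputs, outputs)

# Hypothetical fine-tuning step; optimizer, loss, and data are placeholders
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
x_train = np.random.rand(8, 128, 128, 3).astype("float32")
y_train = np.random.randint(0, 2, size=(8,))
model.fit(x_train, y_train, epochs=1)

# Saving after compile/fit also stores optimizer slot variables in the SavedModel.
# Saving without the optimizer keeps only the inference graph; whether that
# avoids the add_slot error during TF-TRT conversion is untested here.
model.save('saved_models', include_optimizer=False)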

Machine specifications are below:
OS: Ubuntu-18.04
Cuda: 10.2, Cudnn: 7.6.5.32
Tensorflow: 2.6
Tensorrt: 7.2.3.4

Hi @danish.saba
Could you please share the pretrained model that you are trying to optimize here so we can help better?

Thanks

Run the above code in TensorFlow and it will download the model for you.
Here is the link to the model: resnet50

@SunilJB, any update on this? Let me know if you need any more info regarding this.


I am having the same issue! If you have solved it, can you help me out? Your help is appreciated.

Hi @sls14.will,

We recommend that you try the latest TensorRT version, 8.4 GA. If you still face this issue, please create a new post with a repro model and scripts.

Thank you.