TF-TRT converter: Op type not registered 'DisableCopyOnRead'

Hi, I have a Jetson Nano (Tegra X1). I flashed JetPack 4.6.1, and I have Python 3.6.9 and TensorFlow 2.7.0. I want to convert my model to TensorRT. I saved my model in the SavedModel format, and I run the conversion as follows, but I get an error.
Can you help me, please?

from tensorflow.python.compiler.tensorrt import trt_convert as trt

SAVED_MODEL_DIR = "Desktop/models-20231124T092814Z-001/models/native_saved_model/"
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=SAVED_MODEL_DIR,
    precision_mode=trt.TrtPrecisionMode.FP32,
)
# INFO:tensorflow:Linked TensorRT version: (8, 2, 1)
# INFO:tensorflow:Loaded TensorRT version: (8, 2, 1)
trt_func = converter.convert()

Error:

2024-01-11 23:12:27.412273: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:1019] ARM64 does not support NUMA - returning NUMA node zero
[the same NUMA message is repeated several more times]
2024-01-11 23:12:34.349870: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 271 MB memory: -> device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 4098, in _get_op_def
    return self._op_def_cache[type]
KeyError: 'DisableCopyOnRead'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 939, in load_internal
    ckpt_options, options, filters)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 139, in __init__
    meta_graph.graph_def.library, wrapper_function=_WrapperFunction))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/function_deserialization.py", line 388, in load_function_def_library
    func_graph = function_def_lib.function_def_to_graph(copy)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/function_def_to_graph.py", line 64, in function_def_to_graph
    fdef, input_shapes)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/function_def_to_graph.py", line 229, in function_def_to_graph_def
    op_def = default_graph._get_op_def(node_def.op)  # pylint: disable=protected-access
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 4103, in _get_op_def
    buf)
tensorflow.python.framework.errors_impl.NotFoundError: Op type not registered 'DisableCopyOnRead' in binary running on jetson-desktop. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1216, in convert
    self._input_saved_model_tags)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 900, in load
    result = load_internal(export_dir, tags, options)["root"]
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/saved_model/load.py", line 942, in load_internal
    str(err) + "\n You may be trying to load on a different device "
FileNotFoundError: Op type not registered 'DisableCopyOnRead' in binary running on jetson-desktop. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) tf.contrib.resampler should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
You may be trying to load on a different device from the computational device. Consider setting the experimental_io_device option in tf.saved_model.LoadOptions to the io_device such as '/job:localhost'.
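
As a sanity check on the hint in that last line, the SavedModel can be loaded directly with tf.saved_model.LoadOptions before involving the TF-TRT converter. A minimal sketch, reusing SAVED_MODEL_DIR from above; if the missing 'DisableCopyOnRead' op is the real cause, this load will fail the same way:

import tensorflow as tf

SAVED_MODEL_DIR = "Desktop/models-20231124T092814Z-001/models/native_saved_model/"

# Try the io_device hint from the error message by loading the
# SavedModel directly, outside the TF-TRT converter.
options = tf.saved_model.LoadOptions(experimental_io_device="/job:localhost")
model = tf.saved_model.load(SAVED_MODEL_DIR, options=options)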

Hi @karabuluttr23,
This looks like an issue coming from TensorFlow rather than from TensorRT. Would you mind raising it on the TensorFlow forum?
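
Before posting there, it may help to confirm which TensorFlow version produced the SavedModel, since an "Op type not registered" error typically means the model was exported with a newer TF release than the one loading it. A minimal sketch (it assumes saved_model.pb sits directly under SAVED_MODEL_DIR):

import os

from tensorflow.core.protobuf import saved_model_pb2

SAVED_MODEL_DIR = "Desktop/models-20231124T092814Z-001/models/native_saved_model/"

# Parse the SavedModel proto and print the TF version that wrote it,
# so it can be compared against the TF 2.7.0 runtime on the Jetson.
sm = saved_model_pb2.SavedModel()
with open(os.path.join(SAVED_MODEL_DIR, "saved_model.pb"), "rb") as f:
    sm.ParseFromString(f.read())
print(sm.meta_graphs[0].meta_info_def.tensorflow_version)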

Thanks