I am trying to find a way to convert a model generated by TensorFlow to TensorRT.
I am using a Jetson AGX Orin with JetPack 5.1.2.
TensorRT is already installed there, and TensorFlow was installed following the NVIDIA guidance, but the program gives me an error message saying it cannot detect TensorRT.
jetson@ubuntu:~/relsense_ssd/testing_utils$ python3 tfRT.py
ERROR:tensorflow:Tensorflow needs to be built with TensorRT support enabled to allow TF-TRT to operate.
Traceback (most recent call last):
  File "tfRT.py", line 12, in <module>
    converter = trt.TrtGraphConverterV2(
  File "/home/jetson/.local/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 576, in new_func
    return func(*args, **kwargs)
  File "/home/jetson/.local/lib/python3.8/site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1259, in __init__
    _check_trt_version_compatibility()
  File "/home/jetson/.local/lib/python3.8/site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 223, in _check_trt_version_compatibility
    raise RuntimeError("Tensorflow has not been built with TensorRT support.")
RuntimeError: Tensorflow has not been built with TensorRT support.
That is the error message, and this is my code:
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Directory path of the saved model
saved_model_dir = '../checkpoints/'

# Convert the SavedModel
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16,  # example precision mode
    max_workspace_size_bytes=(1 << 28))

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=saved_model_dir,
    conversion_params=conversion_params)
converter.convert()
converter.save(output_saved_model_dir='../checkpoints_trt/')
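
Also, is there a reliable way to confirm from Python whether the installed wheel was actually built with TF-TRT support? Below is the kind of check I had in mind; I am not sure which field in the build info (if any) actually reports TensorRT, so please treat it as my guess rather than something from the docs.

# Environment check I was thinking of running (my own sketch, not from the NVIDIA guide):
# print the TensorFlow version, the visible GPUs, and the build info dictionary,
# then look for anything TensorRT-related in that dictionary.
import tensorflow as tf

print("TF version:", tf.__version__)
print("GPUs:", tf.config.list_physical_devices('GPU'))
print("Build info:", tf.sysconfig.get_build_info())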
Thanks a lot