Hello NVIDIA Developers,
Using transfer learning from the pre-trained SSD-MobileNet-v2 model, I have created and trained my own neural network to detect a single class of objects: people. For this I use TensorFlow 1.15.5 — specifically the tensorflow==1.15.5+nv21.5 build, not tensorflow-gpu.
The Jetson Nano runs Ubuntu 18.04 with JetPack 4.5.1.
My goal is to deploy this model on the Jetson Nano and have it do real-time processing.
Since the model is too heavy for an embedded system like the Jetson Nano, I decided to optimize it with TF-TRT (TensorFlow to TensorRT).
I have a saved_model.pb file containing the pre-trained model, but I did not know where to place it in the code below.
I have already found many example scripts on the Internet, but none of them uses a pre-trained .pb file.
Among these sites, I tried this one: Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation
It gives this:
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model_dir = "r/home/images"
output_saved_model_dir = "r/home/tf_trt"

# Convert the SavedModel to a TF-TRT optimized SavedModel
converter = trt.TrtGraphConverter(
    input_saved_model_dir=input_saved_model_dir,
    max_workspace_size_bytes=(1 << 32),  # was (11<32), which is a boolean, not a size
    precision_mode="FP16",
    maximum_cached_engines=100)
converter.convert()
converter.save(output_saved_model_dir)

with tf.Session() as sess:
    # First load the converted SavedModel into the session
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING],
        output_saved_model_dir)
    # input_tensor and output_tensor must first be looked up in the graph,
    # e.g. with sess.graph.get_tensor_by_name(...)
    output = sess.run([output_tensor], feed_dict={input_tensor: input_data})
The folder containing the .pb model and the .pbtxt file is “r/home/saved_model”, but I did not know where to put this path in the code.
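To figure out the input/output tensor names that sess.run() needs, I tried inspecting the SavedModel's serving signature. Here is a minimal, self-contained sketch of what I mean — note that it builds a tiny dummy model in a temporary directory just so the snippet runs on its own; in my real setup the directory would be the one containing saved_model.pb:

```python
import tempfile
import tensorflow as tf
from tensorflow.python.tools import saved_model_utils

# Dummy stand-in model so this snippet is self-contained; the real
# SavedModel directory would be used here instead.
class Dummy(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([None, 3], tf.float32)])
    def __call__(self, x):
        return x * 2.0

saved_model_dir = tempfile.mkdtemp()
model = Dummy()
tf.saved_model.save(model, saved_model_dir,
                    signatures=model.__call__.get_concrete_function())

# Read the "serving_default" signature to get the actual tensor names
# to pass as input_tensor / output_tensor in sess.run().
meta_graph = saved_model_utils.get_meta_graph_def(saved_model_dir, "serve")
sig = meta_graph.signature_def["serving_default"]
print("inputs: ", {k: v.name for k, v in sig.inputs.items()})
print("outputs:", {k: v.name for k, v in sig.outputs.items()})
```

Is this the right way to recover the tensor names for a model converted with TF-TRT?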
When I run this Python script with python3 code.py, I get:
2021-05-31 16:41:59.329369: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.2
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
2021-05-31 16:42:06.500247: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /user/local/cuda-10.2/lib64:
2021-05-31 16:42:06.500494: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /user/local/cuda-10.2/lib64:
2021-05-31 16:42:06.500538: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2021-05-31 16:42:06.503361: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /user/local/cuda-10.2/lib64:
2021-05-31 16:42:06.503417: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.
Aborted (core dumped)
The first messages relate to the CUDA libraries and the fact that I use tensorflow rather than tensorflow-gpu.
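In case it helps, here is roughly how I checked whether the dynamic loader can see the TensorRT libraries (I am assuming JetPack 4.5.1 ships TensorRT 7.1, so libnvinfer.so.7 should be present, and that on Jetson the libraries live in the standard aarch64 location):

```shell
# List the libnvinfer libraries known to the dynamic loader
ldconfig -p | grep libnvinfer || echo "no libnvinfer in the ldconfig cache"

# On Jetson, the TensorRT libraries are normally installed here
ls /usr/lib/aarch64-linux-gnu/libnvinfer* 2>/dev/null \
  || echo "libnvinfer not found in /usr/lib/aarch64-linux-gnu"
```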
Do you have any advice?
I have also found some example scripts with many more lines — am I missing some elements?
Thank you in advance,
Paul Griffoul