My hardware is a Jetson TX2, and I have installed JetPack 3.2 and TensorFlow 1.9.
I am converting a TensorFlow model to a TensorRT model, following the tf_trt_models sample.
Here is my code:
import tensorflow as tf
import tensorflow.contrib.tensorrt as trt
from tf_trt_models.detection import download_detection_model
from tf_trt_models.detection import build_detection_graph
config_path, checkpoint_path = download_detection_model('ssd_inception_v2_coco')
frozen_graph, input_names, output_names = build_detection_graph(
    config=config_path,
    checkpoint=checkpoint_path
)
trt_graph = trt.create_inference_graph(
    input_graph_def=frozen_graph,
    outputs=output_names,
    max_batch_size=1,
    max_workspace_size_bytes=1 << 25,
    precision_mode='FP16'
)
Then I import the TensorRT graph into a new graph and run:
output_node = tf.import_graph_def(trt_graph, return_elements=output_names)
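For reference, this is how I try to check whether any segments were actually converted. My understanding (which may be wrong) is that TF-TRT replaces each converted segment with a node of type 'TRTEngineOp', so counting those nodes shows whether conversion happened:

```python
def count_trt_engine_ops(node_ops):
    """Count graph nodes whose op type is 'TRTEngineOp', the node type
    TF-TRT uses for segments it converted to TensorRT engines."""
    return sum(1 for op in node_ops if op == 'TRTEngineOp')

# With the real graph I would call:
#   count_trt_engine_ops(node.op for node in trt_graph.node)
# Self-contained check with dummy op names:
print(count_trt_engine_ops(['Const', 'TRTEngineOp', 'Identity', 'TRTEngineOp']))  # → 2
```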
Here is my error:
2019-02-13 16:15:37.672887: I tensorflow/contrib/tensorrt/convert/convert_graph.cc:438] MULTIPLE tensorrt candidate conversion: 7
Segmentation fault (core dumped)
I checked some related topics on the forum, but I see the issue was never resolved, because the original poster stopped running the TensorFlow-TensorRT code, so I hope you can help me fully. Here are the topics I checked:
Besides this, I also followed the code in "Accelerating Inference in TensorFlow with TensorRT", and it failed as well, but with a different error. In this topic, though, I want to focus on the error above.
I have some questions. How can I tell that the TF-to-TRT conversion has completed successfully with this code? Will I get an actual TensorRT model file in the directory? And how can I use the converted TensorRT model with DeepStream SDK 1.5?
I hope you can answer all my questions.
Thank you so much.