Description
Hello, I want to convert my TensorFlow model to TensorRT. After calling converter.convert(), there is a warning that FusedBatchNormV3 and ResizeBilinear could not be converted. When I check the nodes of trt_graph, the number of TRTEngineOp nodes is 34, so I get 34 .plan files. I would like to know how I can get only one .plan file. Can you give me some advice?
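This is roughly how I count the TRTEngineOp nodes (a small sketch; trt_graph is the converted GraphDef produced by the script under Relevant Files):

# Count the TRTEngineOp nodes; each one corresponds to a separate TensorRT engine.
engine_ops = [n for n in trt_graph.node if n.op == 'TRTEngineOp']
print('TRTEngineOp num:', len(engine_ops))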
Environment
TensorRT Version: 7.1.3.0
GPU Type: Jetson Xavier
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version:
Python Version (if applicable): 3.6
TensorFlow Version (if applicable): 1.15
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
import numpy as np
from networks.model import *
import ipdb
import time

with tf.Session() as sess:
    # First deserialize your frozen graph:
    with tf.gfile.GFile("./tf_savemodel/detectionmodel.pb", 'rb') as f:
        frozen_graph = tf.GraphDef()
        frozen_graph.ParseFromString(f.read())
    # Now you can create a TensorRT inference graph from your frozen graph:
    converter = trt.TrtGraphConverter(
        input_graph_def=frozen_graph,
        nodes_blacklist=['Conv_39/BiasAdd'],  # output nodes
        precision_mode='FP16',
    )
    trt_graph = converter.convert()
    # Write out all node names of the converted graph for inspection:
    with open('node2.txt', 'w') as node_f:
        for mm in trt_graph.node:
            node_f.write(mm.name + '\n')
    print('finish convert')
node2.txt (5.9 KB)
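For reference, a minimal sketch of how the per-node engines can be dumped to .plan files (assuming static engines, i.e. each TRTEngineOp stores its serialized engine in its serialized_segment attribute):

# Dump the serialized engine held by each TRTEngineOp node;
# with 34 TRTEngineOp nodes this yields 34 separate .plan files.
for i, node in enumerate(trt_graph.node):
    if node.op == 'TRTEngineOp':
        with open('engine_%d.plan' % i, 'wb') as plan_f:
            plan_f.write(node.attr['serialized_segment'].s)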
Steps To Reproduce