TRTEngineOp issue with the TF-TRT method

I was trying the TF-TRT method and I get 0 TRTEngineOp nodes. What could be the reason?
However, I do get a different number of nodes before and after the TF-TRT conversion.

import tensorflow as tf
from tensorflow.python.platform import gfile
from tensorflow.python.compiler.tensorrt import trt_convert as trt

GRAPH_PB_PATH = r'Frozen.pb'

# Load the frozen graph
with gfile.FastGFile(GRAPH_PB_PATH, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

##################### TF-TRT CODE #############
num_nodes = len(graph_def.node)
graph_size = len(graph_def.SerializeToString())

# 'predictions_1' is the output node, excluded from conversion
converter = trt.TrtGraphConverter(input_graph_def=graph_def,
                                  nodes_blacklist=['predictions_1'])
trt_graph = converter.convert()

print("graph_size(MB)(native_tf): %.1f" % (float(graph_size) / (1 << 20)))
print("graph_size(MB)(trt): %.1f" %
      (float(len(trt_graph.SerializeToString())) / (1 << 20)))
print("num_nodes(native_tf): %d" % num_nodes)
print("num_nodes(tftrt_total): %d" % len(trt_graph.node))
print("num_nodes(trt_only): %d" % len([1 for n in trt_graph.node if str(n.op) == 'TRTEngineOp']))

Configuration: Linux, TensorFlow 1.14, TensorRT 5, Quadro M2000

Maybe it is because a node in your subgraph is not supported by TensorRT.
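
One quick way to see which ops the frozen graph actually contains (and compare them against the TF-TRT supported-ops list) is to tally the op types in the GraphDef. A minimal sketch, reusing the graph_def loaded above:

from collections import Counter

# Any op type not on the TF-TRT supported list will break the graph
# into smaller segments, possibly below minimum_segment_size.
op_counts = Counter(node.op for node in graph_def.node)
for op, count in op_counts.most_common():
    print(op, count)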

@Marcel.gabriel, is there a solution for this with the TF-TRT method?

I tried TF-TRT, too. I had the same problem and never got it solved; I didn’t even find a way to implement the custom layers in TF-TRT. I would recommend converting your TensorFlow model to UFF and writing the custom layers in TensorRT.
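
For the UFF route, the conversion itself can be done with the uff package that ships with TensorRT. A minimal sketch, assuming the same frozen graph and output node as above; custom layers still have to be implemented as plugins on the TensorRT side (e.g., mapped in via a graphsurgeon preprocessor):

import uff

# Convert the frozen TF graph to UFF; layers UFF cannot map
# must be replaced by plugin nodes before or during this step.
uff.from_tensorflow_frozen_model(
    'Frozen.pb',
    output_nodes=['predictions_1'],
    output_filename='model.uff')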

@Marcel-gabriel, thanks for the info and inputs.

I have implemented them using the traditional UFF conversion and a custom-layer plugin in C++. As suggested, it looks like there is currently no solution available via the TF-TRT method for these kinds of models.

Could you post the TF log for the conversion, as explained here: Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation?
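
A minimal sketch for raising the conversion log verbosity from Python; the module names here are an assumption based on the TF-TRT source file names (segment.cc, convert_graph.cc, convert_nodes.cc), and the variable must be set before TensorFlow is imported:

import os

# Per-module verbose logging for the TF-TRT conversion passes
# (assumed module names; adjust per the user guide).
os.environ['TF_CPP_VMODULE'] = 'segment=2,convert_graph=2,convert_nodes=2'

import tensorflow as tf  # import only after setting the env var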