0 trt_only nodes when using TF-TRT

Hi all,

I am currently trying to run the image_classification example from the TF-TRT repo: https://github.com/tensorflow/tensorrt.

However, when running with vgg_19 (or any other model), I get the following output, which reports 0 trt_only nodes:

batch_size: 8
cache: False
calib_data_dir: None
data_dir: None
default_models_dir: ./data
display_every: 100
engine_dir: None
max_workspace_size: 4294967296
minimum_segment_size: 2
mode: benchmark
model: vgg_19
model_dir: None
num_calib_inputs: 500
num_iterations: 100
num_warmup_iterations: 50
precision: FP16
target_duration: None
use_synthetic: True
use_trt: True
use_trt_dynamic_op: False
url: http://download.tensorflow.org/models/vgg_19_2016_08_28.tar.gz
num_nodes(native_tf): 144
num_nodes(tftrt_total): 110
num_nodes(trt_only): 0
graph_size(MB)(native_tf): 548.1
graph_size(MB)(trt): 548.1
time(s)(trt_conversion): 10.2
running inference...

I followed the standalone installation instructions and am not using the NVIDIA Docker container. Here are my specs:

CentOS/RedHat 7: Springdale Linux 7.7
Tesla P100
CUDA 10.1
cuDNN 7.6.3
Python 3.6.8
TensorFlow 1.14
TensorRT 6.0.1.5

I am also having trouble getting log messages to print. The following command seems to have no effect:

TF_CPP_MIN_LOG_LEVEL=2 python image_classification.py ...

Any help would be greatly appreciated!

Hi,

Below is the preferred command to generate a verbose log:
TF_CPP_VMODULE=segment=2,convert_graph=2,convert_nodes=2,trt_engine=1,trt_logger=2 python …
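As a sketch, assuming the script name and the flags shown in your posted config, the full invocation might look like the following. Note that TF_CPP_MIN_LOG_LEVEL works the other way around from what you may expect: higher values suppress more messages, so setting it to 2 hides INFO and WARNING output. Set it to 0 while debugging:

```shell
# 0 = print everything; 1 suppresses INFO, 2 also WARNING, 3 also ERROR
export TF_CPP_MIN_LOG_LEVEL=0
export TF_CPP_VMODULE=segment=2,convert_graph=2,convert_nodes=2,trt_engine=1,trt_logger=2
# script name and flags taken from the original post; adjust for your setup
python image_classification.py --model vgg_19 --precision FP16 --use_trt \
    --use_synthetic --batch_size 8 2>&1 | tee tftrt_verbose.log
```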

Could you please share the generated verbose log so we can better help?

Please refer to the link below for other debugging options:
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#debugging
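For reference, num_nodes(trt_only) in that summary counts the TRTEngineOp nodes the converter created, so 0 means no subgraph was converted and the model still runs entirely in native TensorFlow. A minimal sketch (the helper name is hypothetical) for flagging this condition from the script's printed summary:

```python
def parse_benchmark_summary(text):
    """Parse the `key: value` lines printed by the benchmark script."""
    stats = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep and value.strip():
            stats[key.strip()] = value.strip()
    return stats

# summary lines copied from the original post
summary = """\
num_nodes(native_tf): 144
num_nodes(tftrt_total): 110
num_nodes(trt_only): 0
graph_size(MB)(native_tf): 548.1
graph_size(MB)(trt): 548.1
"""

stats = parse_benchmark_summary(summary)
if int(stats["num_nodes(trt_only)"]) == 0:
    # no TRTEngineOp nodes were created; conversion found no usable segments
    print("TF-TRT conversion produced no TRT engines")
```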

Also, I would recommend using the NGC container to avoid host-side dependency issues.

Thanks