Cannot convert FasterRCNN TLT model to TensorRT engine

Hello everyone,

I’m trying to convert a trained FasterRCNN VGG16 model for TensorRT (i.e. build a .engine file).

I trained my model using the Transfer Learning Toolkit Jupyter notebook example.

Everything works fine until I try to convert the encoded TLT (.etlt) model with the following command:

!tlt-converter -k $KEY  \
               -d 3,640,832 \
               -o dense_class/Softmax,dense_regress/BiasAdd,proposal \
               -e $USER_EXPERIMENT_DIR/data/faster_rcnn/trt.engine \
               -t fp16 \
               -i nchw \
               /workspace/training/faster_rcnn/exp/model/frcnn_vgg16.etlt

This is the log output I got:

[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[INFO] UFFParser: parsing input_1
[INFO] UFFParser: Applying order forwarding to: input_1
[INFO] UFFParser: parsing block1_conv1/kernel
[INFO] UFFParser: Applying order forwarding to: block1_conv1/kernel
[INFO] UFFParser: parsing block1_conv1/convolution
[INFO] UFFParser: Applying order forwarding to: block1_conv1/convolution
[INFO] UFFParser: parsing block1_conv1/bias
[INFO] UFFParser: Applying order forwarding to: block1_conv1/bias
[INFO] UFFParser: parsing block1_conv1/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block1_conv1/BiasAdd
[INFO] UFFParser: parsing block1_conv1/Relu
[INFO] UFFParser: Applying order forwarding to: block1_conv1/Relu
[INFO] UFFParser: parsing block1_conv2/kernel
[INFO] UFFParser: Applying order forwarding to: block1_conv2/kernel
[INFO] UFFParser: parsing block1_conv2/convolution
[INFO] UFFParser: Applying order forwarding to: block1_conv2/convolution
[INFO] UFFParser: parsing block1_conv2/bias
[INFO] UFFParser: Applying order forwarding to: block1_conv2/bias
[INFO] UFFParser: parsing block1_conv2/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block1_conv2/BiasAdd
[INFO] UFFParser: parsing block1_conv2/Relu
[INFO] UFFParser: Applying order forwarding to: block1_conv2/Relu
[INFO] UFFParser: parsing block1_pool/MaxPool
[INFO] UFFParser: Applying order forwarding to: block1_pool/MaxPool
[INFO] UFFParser: parsing block2_conv1/kernel
[INFO] UFFParser: Applying order forwarding to: block2_conv1/kernel
[INFO] UFFParser: parsing block2_conv1/convolution
[INFO] UFFParser: Applying order forwarding to: block2_conv1/convolution
[INFO] UFFParser: parsing block2_conv1/bias
[INFO] UFFParser: Applying order forwarding to: block2_conv1/bias
[INFO] UFFParser: parsing block2_conv1/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block2_conv1/BiasAdd
[INFO] UFFParser: parsing block2_conv1/Relu
[INFO] UFFParser: Applying order forwarding to: block2_conv1/Relu
[INFO] UFFParser: parsing block2_conv2/kernel
[INFO] UFFParser: Applying order forwarding to: block2_conv2/kernel
[INFO] UFFParser: parsing block2_conv2/convolution
[INFO] UFFParser: Applying order forwarding to: block2_conv2/convolution
[INFO] UFFParser: parsing block2_conv2/bias
[INFO] UFFParser: Applying order forwarding to: block2_conv2/bias
[INFO] UFFParser: parsing block2_conv2/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block2_conv2/BiasAdd
[INFO] UFFParser: parsing block2_conv2/Relu
[INFO] UFFParser: Applying order forwarding to: block2_conv2/Relu
[INFO] UFFParser: parsing block2_pool/MaxPool
[INFO] UFFParser: Applying order forwarding to: block2_pool/MaxPool
[INFO] UFFParser: parsing block3_conv1/kernel
[INFO] UFFParser: Applying order forwarding to: block3_conv1/kernel
[INFO] UFFParser: parsing block3_conv1/convolution
[INFO] UFFParser: Applying order forwarding to: block3_conv1/convolution
[INFO] UFFParser: parsing block3_conv1/bias
[INFO] UFFParser: Applying order forwarding to: block3_conv1/bias
[INFO] UFFParser: parsing block3_conv1/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block3_conv1/BiasAdd
[INFO] UFFParser: parsing block3_conv1/Relu
[INFO] UFFParser: Applying order forwarding to: block3_conv1/Relu
[INFO] UFFParser: parsing block3_conv2/kernel
[INFO] UFFParser: Applying order forwarding to: block3_conv2/kernel
[INFO] UFFParser: parsing block3_conv2/convolution
[INFO] UFFParser: Applying order forwarding to: block3_conv2/convolution
[INFO] UFFParser: parsing block3_conv2/bias
[INFO] UFFParser: Applying order forwarding to: block3_conv2/bias
[INFO] UFFParser: parsing block3_conv2/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block3_conv2/BiasAdd
[INFO] UFFParser: parsing block3_conv2/Relu
[INFO] UFFParser: Applying order forwarding to: block3_conv2/Relu
[INFO] UFFParser: parsing block3_conv3/kernel
[INFO] UFFParser: Applying order forwarding to: block3_conv3/kernel
[INFO] UFFParser: parsing block3_conv3/convolution
[INFO] UFFParser: Applying order forwarding to: block3_conv3/convolution
[INFO] UFFParser: parsing block3_conv3/bias
[INFO] UFFParser: Applying order forwarding to: block3_conv3/bias
[INFO] UFFParser: parsing block3_conv3/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block3_conv3/BiasAdd
[INFO] UFFParser: parsing block3_conv3/Relu
[INFO] UFFParser: Applying order forwarding to: block3_conv3/Relu
[INFO] UFFParser: parsing block3_pool/MaxPool
[INFO] UFFParser: Applying order forwarding to: block3_pool/MaxPool
[INFO] UFFParser: parsing block4_conv1/kernel
[INFO] UFFParser: Applying order forwarding to: block4_conv1/kernel
[INFO] UFFParser: parsing block4_conv1/convolution
[INFO] UFFParser: Applying order forwarding to: block4_conv1/convolution
[INFO] UFFParser: parsing block4_conv1/bias
[INFO] UFFParser: Applying order forwarding to: block4_conv1/bias
[INFO] UFFParser: parsing block4_conv1/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block4_conv1/BiasAdd
[INFO] UFFParser: parsing block4_conv1/Relu
[INFO] UFFParser: Applying order forwarding to: block4_conv1/Relu
[INFO] UFFParser: parsing block4_conv2/kernel
[INFO] UFFParser: Applying order forwarding to: block4_conv2/kernel
[INFO] UFFParser: parsing block4_conv2/convolution
[INFO] UFFParser: Applying order forwarding to: block4_conv2/convolution
[INFO] UFFParser: parsing block4_conv2/bias
[INFO] UFFParser: Applying order forwarding to: block4_conv2/bias
[INFO] UFFParser: parsing block4_conv2/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block4_conv2/BiasAdd
[INFO] UFFParser: parsing block4_conv2/Relu
[INFO] UFFParser: Applying order forwarding to: block4_conv2/Relu
[INFO] UFFParser: parsing block4_conv3/kernel
[INFO] UFFParser: Applying order forwarding to: block4_conv3/kernel
[INFO] UFFParser: parsing block4_conv3/convolution
[INFO] UFFParser: Applying order forwarding to: block4_conv3/convolution
[INFO] UFFParser: parsing block4_conv3/bias
[INFO] UFFParser: Applying order forwarding to: block4_conv3/bias
[INFO] UFFParser: parsing block4_conv3/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block4_conv3/BiasAdd
[INFO] UFFParser: parsing block4_conv3/Relu
[INFO] UFFParser: Applying order forwarding to: block4_conv3/Relu
[INFO] UFFParser: parsing block4_pool/MaxPool
[INFO] UFFParser: Applying order forwarding to: block4_pool/MaxPool
[INFO] UFFParser: parsing block5_conv1/kernel
[INFO] UFFParser: Applying order forwarding to: block5_conv1/kernel
[INFO] UFFParser: parsing block5_conv1/convolution
[INFO] UFFParser: Applying order forwarding to: block5_conv1/convolution
[INFO] UFFParser: parsing block5_conv1/bias
[INFO] UFFParser: Applying order forwarding to: block5_conv1/bias
[INFO] UFFParser: parsing block5_conv1/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block5_conv1/BiasAdd
[INFO] UFFParser: parsing block5_conv1/Relu
[INFO] UFFParser: Applying order forwarding to: block5_conv1/Relu
[INFO] UFFParser: parsing block5_conv2/kernel
[INFO] UFFParser: Applying order forwarding to: block5_conv2/kernel
[INFO] UFFParser: parsing block5_conv2/convolution
[INFO] UFFParser: Applying order forwarding to: block5_conv2/convolution
[INFO] UFFParser: parsing block5_conv2/bias
[INFO] UFFParser: Applying order forwarding to: block5_conv2/bias
[INFO] UFFParser: parsing block5_conv2/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block5_conv2/BiasAdd
[INFO] UFFParser: parsing block5_conv2/Relu
[INFO] UFFParser: Applying order forwarding to: block5_conv2/Relu
[INFO] UFFParser: parsing block5_conv3/kernel
[INFO] UFFParser: Applying order forwarding to: block5_conv3/kernel
[INFO] UFFParser: parsing block5_conv3/convolution
[INFO] UFFParser: Applying order forwarding to: block5_conv3/convolution
[INFO] UFFParser: parsing block5_conv3/bias
[INFO] UFFParser: Applying order forwarding to: block5_conv3/bias
[INFO] UFFParser: parsing block5_conv3/BiasAdd
[INFO] UFFParser: Applying order forwarding to: block5_conv3/BiasAdd
[INFO] UFFParser: parsing block5_conv3/Relu
[INFO] UFFParser: Applying order forwarding to: block5_conv3/Relu
[INFO] UFFParser: parsing rpn_conv1/kernel
[INFO] UFFParser: Applying order forwarding to: rpn_conv1/kernel
[INFO] UFFParser: parsing rpn_conv1/convolution
[INFO] UFFParser: Applying order forwarding to: rpn_conv1/convolution
[INFO] UFFParser: parsing rpn_conv1/bias
[INFO] UFFParser: Applying order forwarding to: rpn_conv1/bias
[INFO] UFFParser: parsing rpn_conv1/BiasAdd
[INFO] UFFParser: Applying order forwarding to: rpn_conv1/BiasAdd
[INFO] UFFParser: parsing rpn_conv1/Relu
[INFO] UFFParser: Applying order forwarding to: rpn_conv1/Relu
[INFO] UFFParser: parsing rpn_out_class/kernel
[INFO] UFFParser: Applying order forwarding to: rpn_out_class/kernel
[INFO] UFFParser: parsing rpn_out_class/convolution
[INFO] UFFParser: Applying order forwarding to: rpn_out_class/convolution
[INFO] UFFParser: parsing rpn_out_class/bias
[INFO] UFFParser: Applying order forwarding to: rpn_out_class/bias
[INFO] UFFParser: parsing rpn_out_class/BiasAdd
[INFO] UFFParser: Applying order forwarding to: rpn_out_class/BiasAdd
[INFO] UFFParser: parsing rpn_out_class/Sigmoid
[INFO] UFFParser: Applying order forwarding to: rpn_out_class/Sigmoid
[INFO] UFFParser: parsing rpn_out_regress/kernel
[INFO] UFFParser: Applying order forwarding to: rpn_out_regress/kernel
[INFO] UFFParser: parsing rpn_out_regress/convolution
[INFO] UFFParser: Applying order forwarding to: rpn_out_regress/convolution
[INFO] UFFParser: parsing rpn_out_regress/bias
[INFO] UFFParser: Applying order forwarding to: rpn_out_regress/bias
[INFO] UFFParser: parsing rpn_out_regress/BiasAdd
[INFO] UFFParser: Applying order forwarding to: rpn_out_regress/BiasAdd
[INFO] UFFParser: parsing proposal
[INFO] UFFParser: parsing dense_regress/bias
[INFO] UFFParser: Applying order forwarding to: dense_regress/bias
[INFO] UFFParser: parsing roi_pooling_conv_1/CropAndResize_new
[INFO] UFFParser: parsing classifier_pool/MaxPool
[INFO] UFFParser: Applying order forwarding to: classifier_pool/MaxPool
[INFO] UFFParser: parsing classifier_flatten/Reshape
[INFO] UFFParser: Applying order forwarding to: classifier_flatten/Reshape
[INFO] UFFParser: parsing fc1/kernel
[INFO] UFFParser: Applying order forwarding to: fc1/kernel
[INFO] UFFParser: parsing fc1/MatMul
[INFO] UFFParser: Inserting transposes for fc1/MatMul
[INFO] UFFParser: Applying order forwarding to: fc1/MatMul
[INFO] UFFParser: parsing fc1/bias
[INFO] UFFParser: Applying order forwarding to: fc1/bias
[INFO] UFFParser: parsing fc1/BiasAdd
[ERROR] fc1/MatMul: kernel weights has count 102760448 but 18874368 was expected
[ERROR] UffParser: Parser error: fc1/BiasAdd: The input to the Scale Layer is required to have a minimum of 3 dimensions.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 546685567
[ERROR] Failed to parse uff model
[ERROR] Network must have at least one output
[ERROR] Unable to create engine
Segmentation fault (core dumped)
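
One side note from my own back-of-envelope arithmetic (just my guess, not something the tool reports): the weight count 102760448 in the fc1/MatMul error is exactly 4096 x 512 x 7 x 7, i.e. VGG16’s fc1 layer fed by a 7x7x512 ROI-pooled feature map, while the expected count would correspond to a much smaller flattened input, as if the reshape before fc1 were being given the wrong shape:

# back-of-envelope check of the two counts in the fc1/MatMul error above
# (my own arithmetic; the 3x3 interpretation is only a guess)
python3 -c "print(4096 * 512 * 7 * 7)"   # 102760448 -> count stored in the model (7x7x512 pooled input, 4096 outputs)
python3 -c "print(18874368 // 4096)"     # 4608 = 512 * 3 * 3 -> flattened input size the parser seems to expect instead of 25088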

Some people seem to have resolved the same issue by changing the input dimension ordering to nhwc. In my case I still get the same error.
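
For reference, the nhwc attempt was essentially the same command as above with only the -i flag changed, i.e. something like this (same key, paths and output nodes; shown only as a sketch):

!tlt-converter -k $KEY  \
               -d 3,640,832 \
               -o dense_class/Softmax,dense_regress/BiasAdd,proposal \
               -e $USER_EXPERIMENT_DIR/data/faster_rcnn/trt.engine \
               -t fp16 \
               -i nhwc \
               /workspace/training/faster_rcnn/exp/model/frcnn_vgg16.etlt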

This is the spec file I used to train my model:

random_seed: 42
enc_key: "MYAPIKEY"
verbose: True
network_config {
  input_image_config {
    image_type: RGB
    image_channel_order: 'rgb'
    size_height_width {
      height: 640
      width: 832
    }
    image_channel_mean {
      key: 'b'
      value: 103.939
    }
    image_channel_mean {
      key: 'g'
      value: 116.779
    }
    image_channel_mean {
      key: 'r'
      value: 123.68
    }
    image_scaling_factor: 1.0
  }
  feature_extractor: "vgg"
  anchor_box_config {
    scale: 128.0
    scale: 256.0
    scale: 512.0
    ratio: 1.0
    ratio: 0.5
    ratio: 2.0
  }
  freeze_bn: True
  freeze_blocks: 1
  freeze_blocks: 2
  roi_mini_batch: 256
  rpn_stride: 16
  conv_bn_share_bias: True
  roi_pooling_config {
    pool_size: 7
    pool_size_2x: True
  }
}
training_config {
  kitti_data_config {
    images_dir: '/workspace/training/faster_rcnn/data/train/image'
    labels_dir: '/workspace/training/faster_rcnn/data/train/label'
  }
  training_data_parser: 'raw_kitti'
  data_augmentation {
    use_augmentation: True
    spatial_augmentation {
      hflip_probability: 0.5
      vflip_probability: 0.0
      zoom_min: 1.0
      zoom_max: 1.0
      translate_max_x: 0
      translate_max_y: 0
    }
    color_augmentation {
      color_shift_stddev: 0.0
      hue_rotation_max: 0.0
      saturation_shift_max: 0.0
      contrast_scale_max: 0.0
      contrast_center: 0.5
    }
  }
  num_epochs: 20
  class_mapping {
    key: 'person'
    value: 0
  }
  class_mapping {
    key: "background"
    value: 1
  }
  pretrained_model: "/workspace/training/faster_rcnn/exp/export/model_2_pruned.tlt"
  pretrained_weights: ""
  output_weights: "/workspace/training/faster_rcnn/exp/weights/pruned/frcnn_vgg16.tltw"
  output_model: "/workspace/training/faster_rcnn/exp/model/pruned/frcnn_vgg16.tlt"
  rpn_min_overlap: 0.3
  rpn_max_overlap: 0.7
  classifier_min_overlap: 0.0
  classifier_max_overlap: 0.5
  gt_as_roi: False
  std_scaling: 1.0
  classifier_regr_std {
    key: 'x'
    value: 10.0
  }
  classifier_regr_std {
    key: 'y'
    value: 10.0
  }
  classifier_regr_std {
    key: 'w'
    value: 5.0
  }
  classifier_regr_std {
    key: 'h'
    value: 5.0
  }
  rpn_mini_batch: 256
  rpn_pre_nms_top_N: 12000
  rpn_nms_max_boxes: 2000
  rpn_nms_overlap_threshold: 0.7
  reg_config {
    reg_type: 'L2'
    weight_decay: 1e-4
  }
  optimizer {
    adam {
      lr: 0.00001
      beta_1: 0.9
      beta_2: 0.999
      decay: 0.0
    }
  }
  lr_scheduler {
    step {
      base_lr: 0.00001
      gamma: 1.0
      step_size: 30
    }
  }
  lambda_rpn_regr: 1.0
  lambda_rpn_class: 1.0
  lambda_cls_regr: 1.0
  lambda_cls_class: 1.0
  inference_config {
    images_dir: '/workspace/training/faster_rcnn/data/test/image'
    model: '/workspace/training/faster_rcnn/exp/model/pruned/frcnn_vgg16.epoch20.tlt'
    detection_image_output_dir: '/workspace/training/faster_rcnn/exp/inference/images'
    labels_dump_dir: '/workspace/training/faster_rcnn/exp/inference/labels'
    rpn_pre_nms_top_N: 6000
    rpn_nms_max_boxes: 300
    rpn_nms_overlap_threshold: 0.7
    bbox_visualize_threshold: 0.6
    classifier_nms_max_boxes: 300
    classifier_nms_overlap_threshold: 0.3
  }
  evaluation_config {
    dataset {
      images_dir: '/workspace/training/faster_rcnn/data/test/image'
      labels_dir: '/workspace/training/faster_rcnn/data/test/label'
    }
    data_parser: 'raw_kitti'
    model: '/workspace/training/faster_rcnn/exp/model/pruned/frcnn_vgg16.epoch20.tlt'
    labels_dump_dir: '/workspace/training/faster_rcnn/exp/inference/labels'
    rpn_pre_nms_top_N: 6000
    rpn_nms_max_boxes: 300
    rpn_nms_overlap_threshold: 0.7
    classifier_nms_max_boxes: 300
    classifier_nms_overlap_threshold: 0.3
    object_confidence_thres: 0.0001
    use_voc07_11point_metric: True
  }
}

The export to an .etlt model seems to work fine:

# Export in FP32 mode. \
!tlt-export -k $KEY \
                --export_module faster_rcnn \
                --outputs dense_class/Softmax,dense_regress/BiasAdd,proposal \
                --experiment_spec $SPECS_DIR/spec.txt\
                /workspace/training/faster_rcnn/exp/model/frcnn_vgg16.epoch50.tlt
Using TensorFlow backend.
2019-12-13 15:29:05,608 [INFO] iva.common.magnet_export: Loading model from /workspace/training/faster_rcnn/exp/model/frcnn_vgg16.epoch50.tlt
2019-12-13 15:29:05.609488: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-13 15:29:05.661471: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-12-13 15:29:05.662007: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x7a7c470 executing computations on platform CUDA. Devices:
2019-12-13 15:29:05.662024: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): GeForce GTX TITAN X, Compute Capability 5.2
2019-12-13 15:29:05.683362: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3990430000 Hz
2019-12-13 15:29:05.683843: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x7ae6370 executing computations on platform Host. Devices:
2019-12-13 15:29:05.683876: I tensorflow/compiler/xla/service/service.cc:158]   StreamExecutor device (0): <undefined>, <undefined>
2019-12-13 15:29:05.684210: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties: 
name: GeForce GTX TITAN X major: 5 minor: 2 memoryClockRate(GHz): 1.19
pciBusID: 0000:01:00.0
totalMemory: 11.92GiB freeMemory: 10.98GiB
2019-12-13 15:29:05.684234: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-12-13 15:29:05.806139: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-13 15:29:05.806171: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 
2019-12-13 15:29:05.806178: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N 
2019-12-13 15:29:05.806397: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10648 MB memory) -> physical GPU (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0, compute capability: 5.2)
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-12-13 15:29:14,995 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
2019-12-13 15:29:15,156 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
2019-12-13 15:29:26,652 [INFO] /usr/local/lib/python2.7/dist-packages/iva/faster_rcnn/spec_loader/spec_loader.pyc: Loading experiment spec at /workspace/training/faster_rcnn/specs/spec.txt.
2019-12-13 15:29:28.459865: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-12-13 15:29:28.459920: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-13 15:29:28.459930: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 
2019-12-13 15:29:28.459937: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N 
2019-12-13 15:29:28.460129: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10648 MB memory) -> physical GPU (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0, compute capability: 5.2)
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:249: __init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
2019-12-13 15:29:29,466 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:249: __init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.gfile.GFile.
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
2019-12-13 15:29:29,652 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:127: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
2019-12-13 15:29:29.717801: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-12-13 15:29:29.717859: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-13 15:29:29.717878: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0 
2019-12-13 15:29:29.717884: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N 
2019-12-13 15:29:29.718063: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10648 MB memory) -> physical GPU (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0, compute capability: 5.2)
INFO:tensorflow:Restoring parameters from /tmp/tmpyj3Q9k.ckpt
2019-12-13 15:29:29,754 [INFO] tensorflow: Restoring parameters from /tmp/tmpyj3Q9k.ckpt
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:232: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
2019-12-13 15:29:29,984 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/tools/freeze_graph.py:232: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.convert_variables_to_constants
WARNING:tensorflow:From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/graph_util_impl.py:245: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
2019-12-13 15:29:29,984 [WARNING] tensorflow: From /usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/graph_util_impl.py:245: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.compat.v1.graph_util.extract_sub_graph
INFO:tensorflow:Froze 40 variables.
2019-12-13 15:29:30,065 [INFO] tensorflow: Froze 40 variables.
INFO:tensorflow:Converted 40 variables to const ops.
2019-12-13 15:29:30,542 [INFO] tensorflow: Converted 40 variables to const ops.
WARNING: The version of TensorFlow installed on this system is not guaranteed to work with UFF.
Warning: No conversion function registered for layer: Proposal yet.
Converting proposal as custom op: Proposal

DEBUG: convert reshape to flatten node
Warning: No conversion function registered for layer: CropAndResize yet.
Converting roi_pooling_conv_1/CropAndResize_new as custom op: CropAndResize
2019-12-13 15:29:36,713 [INFO] iva.common.magnet_export: Converted model was saved into /workspace/training/faster_rcnn/exp/model/frcnn_vgg16.etlt
2019-12-13 15:29:36,713 [INFO] iva.common.magnet_export: Input node: input_1
2019-12-13 15:29:36,713 [INFO] iva.common.magnet_export: Output node(s): ['dense_class/Softmax', 'dense_regress/BiasAdd', 'proposal']

Thank you

Hi steventel,
I moved your topic into the TLT forum since it is related to TLT.

Hi steventel,
Where did you run tlt-converter: inside the Docker container on your x86_64 host, or on the Jetson device?

Hi,

I ran tlt-converter inside the Docker container on my x86_64 host, using the example Jupyter notebook.

Could you try to generate the TRT engine in fp32 mode?
Your .etlt model is in fp32 mode.
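
For a quick check, something like this (reusing the variables and paths from your command above, with only the -t flag changed to fp32):

!tlt-converter -k $KEY  \
               -d 3,640,832 \
               -o dense_class/Softmax,dense_regress/BiasAdd,proposal \
               -e $USER_EXPERIMENT_DIR/data/faster_rcnn/trt.engine \
               -t fp32 \
               -i nchw \
               /workspace/training/faster_rcnn/exp/model/frcnn_vgg16.etlt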

I have already tried fp16 and fp32 conversion with both fp16 and fp32 .etlt models. I get the same error in every case.

Thank you for trying to help me

Could you please save your Jupyter notebook as an HTML file and attach it here? I want to check all the logs.

Hi Morganh,

You can download the html version of the jupyter notebook with the following link:

https://we.tl/t-1n4dHb9yaY

The pruning part is not important; in this notebook I’m trying to export an unpruned model.

Focus on the “4. Evaluate trained models” and “9. Deploy!” sections.

The evaluation of the trained model and the tlt-export seem to work fine.

Thanks

Hi steventel,
Thanks for your info. I can reproduce the issue with your steps and the VGG backbone.
I’m not sure it is related to VGG yet; at least it does not reproduce with the ResNet backbone.
We are checking internally and will post an update if there is any finding.

Ignore my previous comment.
I reproduced your error with the command below, which is missing the channel number by mistake:

tlt-converter -k <my key> -d 384,1248 -o dense_class/Softmax,dense_regress/BiasAdd,proposal -e vgg.engine -t fp32 -i nchw vgg.etlt

If I add the channel number, tlt-converter generates the TRT engine successfully:

tlt-converter -k <my key> -d 3,384,1248 -o dense_class/Softmax,dense_regress/BiasAdd,proposal -e vgg.engine -t fp32 -i nchw vgg.etlt

Please double-check the command you are running.
BTW, I used the KITTI dataset for training; you can use it to cross-check too. Everything has run successfully with the KITTI dataset so far.
Remember to set the following if you plan to use the KITTI dataset:

image_channel_order: 'bgr'

I see that you set the following in your spec for your own data:

image_channel_order: 'rgb'