Tutorial Spec Error: Message type "RegularizerConfig" has no field named "reg_type"

I am getting an error when using the spec file found in the tutorial:

google.protobuf.text_format.ParseError: 119:5 : Message type "RegularizerConfig" has no field named "reg_type".

random_seed: 42
enc_key: 'tlt'
verbose: True
network_config {
  input_image_config {
    image_type: RGB
    image_channel_order: 'bgr'
    size_height_width {
      height: 384
      width: 1248
    }
    image_channel_mean {
      key: 'b'
      value: 103.939
    }
    image_channel_mean {
      key: 'g'
      value: 116.779
    }
    image_channel_mean {
      key: 'r'
      value: 123.68
    }
    image_scaling_factor: 1.0
    max_objects_num_per_image: 100
  }
  feature_extractor: "resnet:18"
  anchor_box_config {
    scale: 64.0
    scale: 128.0
    scale: 256.0
    ratio: 1.0
    ratio: 0.5
    ratio: 2.0
  }
  freeze_bn: True
  freeze_blocks: 0
  freeze_blocks: 1
  roi_mini_batch: 256
  rpn_stride: 16
  conv_bn_share_bias: False
  roi_pooling_config {
    pool_size: 7
    pool_size_2x: False
  }
  all_projections: True
  use_pooling: False
}
training_config {
  kitti_data_config {
    data_sources: {
      tfrecords_path: "/data/tfrecords/*"
      image_directory_path: "/data/"
    }
    image_extension: 'jpeg'
    target_class_mapping {
      key: 'bird'
      value: 'bird'
    }
    validation_fold: 0
  }
  data_augmentation {
    preprocessing {
      output_image_width: 1248
      output_image_height: 384
      output_image_channel: 3
      min_bbox_width: 1.0
      min_bbox_height: 1.0
    }
    spatial_augmentation {
      hflip_probability: 0.5
      vflip_probability: 0.0
      zoom_min: 1.0
      zoom_max: 1.0
      translate_max_x: 0
      translate_max_y: 0
    }
    color_augmentation {
      hue_rotation_max: 0.0
      saturation_shift_max: 0.0
      contrast_scale_max: 0.0
      contrast_center: 0.5
    }
  }
  enable_augmentation: True
  batch_size_per_gpu: 16
  num_epochs: 12
  pretrained_weights: "/data/tlt_resnet18_faster_rcnn_v1/resnet18.h5"
  #resume_from_model: "/workspace/tlt-experiments/data/faster_rcnn/resnet18.epoch2.tlt"
  #retrain_pruned_model: "/workspace/tlt-experiments/data/faster_rcnn/model_1_pruned.tlt"
  output_model: "/data/frcnn-17/frcnn_kitti_resnet18.tlt"
  rpn_min_overlap: 0.3
  rpn_max_overlap: 0.7
  classifier_min_overlap: 0.0
  classifier_max_overlap: 0.5
  gt_as_roi: False
  std_scaling: 1.0
  classifier_regr_std {
    key: 'x'
    value: 10.0
  }
  classifier_regr_std {
    key: 'y'
    value: 10.0
  }
  classifier_regr_std {
    key: 'w'
    value: 5.0
  }
  classifier_regr_std {
    key: 'h'
    value: 5.0
  }
  rpn_mini_batch: 256
  rpn_pre_nms_top_N: 12000
  rpn_nms_max_boxes: 2000
  rpn_nms_overlap_threshold: 0.7
  reg_config {
    reg_type: 'L2'
    weight_decay: 1e-4
  }
  optimizer {
    adam {
      lr: 0.00001
      beta_1: 0.9
      beta_2: 0.999
      decay: 0.0
    }
  }
  lr_scheduler {
    step {
      base_lr: 0.00016
      gamma: 1.0
      step_size: 30
    }
  }
  lambda_rpn_regr: 1.0
  lambda_rpn_class: 1.0
  lambda_cls_regr: 1.0
  lambda_cls_class: 1.0
  inference_config {
    images_dir: '/data/images/'
    model: '/data/frcnn-18/frcnn_kitti_resnet18.epoch12.tlt'
    detection_image_output_dir: 'data/frcnn-18/inference_results_imgs'
    labels_dump_dir: '/data/frcnn-18/inference_dump_labels'
    rpn_pre_nms_top_N: 6000
    rpn_nms_max_boxes: 300
    rpn_nms_overlap_threshold: 0.7
    bbox_visualize_threshold: 0.6
    classifier_nms_max_boxes: 300
    classifier_nms_overlap_threshold: 0.3
  }
  evaluation_config {
    model: 'data/frcnn-18/frcnn_kitti_resnet18.epoch12.tlt'
    labels_dump_dir: '/data/frcnn-18/test_dump_labels'
    rpn_pre_nms_top_N: 6000
    rpn_nms_max_boxes: 300
    rpn_nms_overlap_threshold: 0.7
    classifier_nms_max_boxes: 300
    classifier_nms_overlap_threshold: 0.3
    object_confidence_thres: 0.0001
    use_voc07_11point_metric: False
  }
}

How do I solve this?

tlt-train faster_rcnn -e ./train-rcnn18.txt -r ./trained -k KEY

Refer to the sample in https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/text/creating_experiment_spec.html#training-configuration

Try the format below in place of the reg_config block:

regularizer {
  type: L1
  weight: 3e-5
}
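The error itself is generic protobuf behavior: text_format rejects any field name that is not declared in the target message type, reporting the line and column where it appears. A minimal sketch of the same failure mode, using the standard Duration proto as a stand-in since the TLT spec protos are not importable here:

```python
from google.protobuf import text_format
from google.protobuf.duration_pb2 import Duration

# A declared field parses without complaint.
msg = text_format.Parse("seconds: 42", Duration())
print(msg.seconds)  # 42

# An undeclared field raises the same ParseError class seen with reg_type.
try:
    text_format.Parse("reg_type: 'L2'", Duration())
except text_format.ParseError as err:
    print(err)  # ... has no field named "reg_type".
```

The fix is always the same: make the field names in the text spec match the message definition the tool was built with, which is why the renamed regularizer block above resolves the error.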

@Morganh

I have been able to get some very good inference results from FRCNN after training. I am having some issues retraining the pruned model, though. I bypassed pruning and attempted to use tlt-converter on my Jetson TX1, but I was unable to convert the unpruned model to a .engine due to workspace size issues (FRCNN's proposal layer is requesting 13 GB of memory). I am wondering if you can help me get the trained model running.

Sample outputs of inference in TLT:

Hi @Sneaky_Turtle
Can you share the command you ran when you hit the issue mentioned above?


Please add the "-w" option, for example, set "-w 100000000".
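For reference, "-w" takes the TensorRT workspace size as a raw byte count; the 100000000 above is roughly 95 MiB. A small sketch for computing the value from a size in MiB (variable names are my own):

```shell
# Convert a workspace size in MiB to the byte count that -w expects.
WORKSPACE_MIB=96
WORKSPACE_BYTES=$((WORKSPACE_MIB * 1024 * 1024))
echo "-w ${WORKSPACE_BYTES}"   # -w 100663296
```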


I am currently reflashing the TX1. It crashed while running DeepStream and won't start again, so I'm going to rebuild and will post commands and logs when I have finished the flash.

TLT-Convert Trained Unpruned Model

user@tx1:/opt/nvidia/tlt-converter$ ./tlt-converter -k $KEY -d 3,512,512 -o dense_class_td/Softmax,dense_regress_td/BiasAdd,proposal -e /opt/nvidia/tlt-converter/unpruned-engine.engine -w 100000000 /media/user/SD/tlt-models/frcnn_resnet18.etlt
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[INFO] Detected 1 inputs and 3 output network tensors.

TLT-Convert Pruned, not retrained.

./tlt-converter -k KEY -d 3,512,512 -o dense_class_td/Softmax,dense_regress_td/BiasAdd,proposal -w 100000000 /media/user/SD/tlt-models/frcnn_resnet18_pruned_notretrained.etlt
[INFO] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[INFO] Detected 1 inputs and 3 output network tensors.
user@tx1:/opt/nvidia/tlt-converter$ ls
Readme.txt  tlt-converter

Actually, your TRT engine was generated successfully, but it was not saved.

Please run the command below to grant write access, then run tlt-converter again.
$ sudo chown -R user:user /opt/nvidia/
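A quick way to catch this class of problem up front is to check that the directory the engine will be written into is writable before running the conversion. A sketch, with the path taken from the thread:

```shell
# Warn if the output directory is not writable by the current user.
OUT_DIR="/opt/nvidia/tlt-converter"
if [ -w "$OUT_DIR" ]; then
    echo "writable: $OUT_DIR"
else
    echo "not writable: $OUT_DIR (try: sudo chown -R $USER:$USER $OUT_DIR)"
fi
```

Alternatively, pass "-e" with an output path in a directory you already own, as in the unpruned conversion above, so no chown is needed.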


@Morganh

Yep, that works!

DeepStream is running the engine. Now to tune the training and experiment. I hope you can find out why DetectNet is producing 0 mAP; I would like to compare it to the other models, especially since its bounding boxes are automatically parsed in DeepStream.