ValueError: need more than 1 value to unpack

Hi,

Thank you for the TLT general release! I learn a lot from this forum, so thank you very much for the answers.

After downloading and installing the TLT docker image and preparing my dataset as per the instructions in the getting started guide, I want to run a training cycle using the ResNet-50 model.

I am fine-tuning the ResNet-50 model on a single class called 'hand'.

However, right at the start I got this error:

2020-06-29 21:17:33,642 [INFO] /usr/local/lib/python2.7/dist-packages/iva/faster_rcnn/scripts/train.pyc: Pretrained model loaded!
Traceback (most recent call last):
  File "/usr/local/bin/tlt-train-g1", line 8, in <module>
    sys.exit(main())
  File "./common/magnet_train.py", line 30, in main
  File "./faster_rcnn/scripts/train.py", line 127, in main
ValueError: need more than 1 value to unpack

I use the command:

tlt-train faster_rcnn -e tlt-experiments/exp1/tlt-resnet-spec.txt

I am unable to start the training.
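
For reference, this ValueError is just Python 2's generic message when tuple unpacking receives fewer values than it has targets; a minimal sketch of the failure mode (illustrative only, not the actual TLT source):

# Python 2: unpacking a single value into two names raises this exact error.
parts = 'only_one_field'.split(',')  # -> ['only_one_field']
key, value = parts                   # ValueError: need more than 1 value to unpack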

Could you please share tlt-experiments/exp1/tlt-resnet-spec.txt?
Thanks.

Hi @Morganh,

Thank you for your quick reply!

The spec file is as follows:

random_seed: 42
enc_key: "API"
verbose: True
network_config {
  input_image_config {
    image_type: RGB
    image_channel_order: 'bgr'
    size_min {
      min: 600
    }
    image_channel_mean {
      key: 'b'
      value: 103.939
    }
    image_channel_mean {
      key: 'g'
      value: 116.779
    }
    image_channel_mean {
      key: 'r'
      value: 123.68
    }
    image_scaling_factor: 1.0
  }
  feature_extractor: "resnet:50"
  anchor_box_config {
    scale: 128.0
    scale: 256.0
    scale: 512.0
    ratio: 1.0
    ratio: 0.5
    ratio: 2.0
  }
  freeze_bn: True
  freeze_blocks: 1
  freeze_blocks: 2
  roi_mini_batch: 256
  rpn_stride: 16
  conv_bn_share_bias: True
  roi_pooling_config {
    pool_size: 7
    pool_size_2x: True
  }
}
training_config {
  kitti_data_config {
    images_dir: '/workspace/datasets/images'
    labels_dir: '/workspace/datasets/label'
  }
  training_data_parser: 'raw_kitti'
  data_augmentation {
    use_augmentation: False
    spatial_augmentation {
      hflip_probability: 0.5
      vflip_probability: 0.0
      zoom_min: 1.0
      zoom_max: 1.0
      translate_max_x: 0
      translate_max_y: 0
    }
    color_augmentation {
      color_shift_stddev: 0.0
      hue_rotation_max: 0.0
      saturation_shift_max: 0.0
      contrast_scale_max: 0.0
      contrast_center: 0.5
    }
  }
  num_epochs: 12

  class_mapping {
    key: 'hand'
    value: 0
  }
  class_mapping {
    key: "background"
    value: 1
  }

  pretrained_model: "/workspace/tlt-experiments/exp1/moels/resnet50.hdf5"
  pretrained_weights: "/workspace/tlt-experiments/exp1/models/resnet50.h5"
  output_weights: "/workspace/tlt-experiments/exp1/resnet50.tltw"
  output_model: "/workspace/tlt-experiments/exp1/resnet50.tlt"
  rpn_min_overlap: 0.3
  rpn_max_overlap: 0.7
  classifier_min_overlap: 0.0
  classifier_max_overlap: 0.5
  gt_as_roi: False
  std_scaling: 1.0
  classifier_regr_std {
    key: 'x'
    value: 10.0
  }
  classifier_regr_std {
    key: 'y'
    value: 10.0
  }
  classifier_regr_std {
    key: 'w'
    value: 5.0
  }
  classifier_regr_std {
    key: 'h'
    value: 5.0
  }

  rpn_mini_batch: 256
  rpn_pre_nms_top_N: 12000
  rpn_nms_max_boxes: 2000
  rpn_nms_overlap_threshold: 0.7
  reg_config {
    reg_type: 'L2'
    weight_decay: 1e-4
  }

  optimizer {
    adam {
      lr: 0.00001
      beta_1: 0.9
      beta_2: 0.999
      decay: 0.0
    }
  }

  lr_scheduler {
    step {
      base_lr: 0.00001
      gamma: 1.0
      step_size: 30
    }
  }

  lambda_rpn_regr: 1.0
  lambda_rpn_class: 1.0
  lambda_cls_regr: 1.0
  lambda_cls_class: 1.0

  inference_config {
    images_dir: '/workspace/datasets/test'
    model: '/workspace/tlt-experiments/exp1/resnet50.epoch12.tlt'
    detection_image_output_dir: '/workspace/tlt-experiments/exp1/infer_results_imgs'
    labels_dump_dir: '/workspace/tlt-experiments/exp1/infer_dump_labels'
    rpn_pre_nms_top_N: 6000
    rpn_nms_max_boxes: 300
    rpn_nms_overlap_threshold: 0.7
    bbox_visualize_threshold: 0.6
    classifier_nms_max_boxes: 300
    classifier_nms_overlap_threshold: 0.3
  }
}

I don't understand what I am missing!

Which docker did you run? 1.0.1 or 2.0_dp?

Hi @Morganh

It's 1.0.1.

Please try the latest TLT 2.0_dp.
For the spec file, please first refer to the example spec files inside the docker:

examples/faster_rcnn/spec

Also, I suggest running the Jupyter notebook first.

Hi @Morganh

Thank you so much for the direction. But I would really like to know why I can't use the 1.0.1 docker, since I have used it before to train the model.

I don't mean to offend you; I always follow your replies to all the posts and learn from them.

I will let you know the results using 2.0_dp.

With kindest regards

For the 1.0.1 docker, I need to check your spec again. I am afraid something is missing or wrong.

Could you please remove pretrained_model and retry?

For example, modify it as below:

pretrained_model: ""
pretrained_weights: "your-path/resnet50.h5"

Hi @Morganh,

I just started the training with v2.0 with no problem at all. Thank you very much for the direction. But I still wonder what's wrong with v1.0.1, as I used it before and it was smooth.

In v2.0 the spec file is different from v1.0.1.

I am using a GTX 950M with 4 GB and the training is a bit slow. Is there any way it can be faster?

Previously I was using a Tesla T4 with 16 GB and it was very fast.

Once again, thank you for the support. Much appreciated.

Hi @Morganh

That was the problem, but how did you know that?

I am grateful for your support.

Now, in v1.0.1, I am facing the following error:

Compressed_class_mapping: {u'background': 2, u'hand': 1}
Name mapping: {u'background': u'background', u'hand': u'hand'}
Training dataset stats (compressed via class mapping):
{u'background': 0, u'hand': 1343}

Traceback (most recent call last):
  File "/usr/local/bin/tlt-train-g1", line 8, in <module>
    sys.exit(main())
  File "./common/magnet_train.py", line 30, in main
  File "./faster_rcnn/scripts/train.py", line 273, in main
  File "./faster_rcnn/data_loader/loader.py", line 200, in kitti_data_gen
UnboundLocalError: local variable 'image_channel_order' referenced before assignment

The spec file is the one I already posted above!
I wonder what is wrong with the image channel!
Thank you.

For your latest error, please refer to Traning FasterRCNN using Transfer Learning Toolkit - #7 by Morganh

Hi @Morganh

That was the exact solution. When I set use_augmentation to false, the error comes up; when it is true, the training starts.
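
In case it helps anyone hitting the same UnboundLocalError, here is a hypothetical sketch of the control-flow pattern that produces it, assuming the loader only binds image_channel_order inside the augmentation branch (an illustration of the error pattern, not the actual TLT loader code):

# Hypothetical simplification, not the TLT source: the local name is only
# bound when augmentation is enabled, so disabling it leaves the name unbound.
def load_sample(use_augmentation):
    if use_augmentation:
        image_channel_order = 'bgr'  # only assigned on this path
    # With use_augmentation=False, the next line raises:
    # UnboundLocalError: local variable 'image_channel_order' referenced before assignment
    return image_channel_order

load_sample(use_augmentation=False)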

Thank you so much!