Why are there still so many trainable parameters even after freezing all the layers?

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc)
NVIDIA RTX A5000

• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
YOLOv4 Object Detection with ResNet 18 Backbone

• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
Command ‘tlt’ not found, did you mean:

• Training spec file(If have, please share here)
Training spec (train.yaml)

random_seed: 42
yolov4_config {
  big_anchor_shape: "[(40, 30), (50, 25), (80, 50)]"
  mid_anchor_shape: "[(20,10), (30, 16), (40, 20)]"
  small_anchor_shape: "[(8, 5), (16, 8), (10, 20)]"
  box_matching_iou: 0.5
  matching_neutral_box_iou: 0.5
  arch: "resnet"
  nlayers: 18
  arch_conv_blocks: 2
  loss_loc_weight: 1.0
  loss_neg_obj_weights: 1.0
  loss_class_weights: 1.0
  label_smoothing: 0.0
  big_grid_xy_extend: 0.05
  mid_grid_xy_extend: 0.1
  small_grid_xy_extend: 0.2
  freeze_bn: true
  freeze_blocks: [0, 1, 2, 3, 4, 5, 6, 7]
  force_relu: false
}
training_config {
  batch_size_per_gpu: 4
  num_epochs: 100
  enable_qat: false
  checkpoint_interval: 3
  learning_rate {
    soft_start_cosine_annealing_schedule {
      min_learning_rate: 1e-8
      max_learning_rate: 1e-5
      soft_start: 0.3
    }
  }
  visualizer {
    clearml_config {
      project: "TAO Toolkit ClearML Demo"
      task: "YOLOV4"
      tags: "YOLOV4"
      tags: "training"
      tags: "resnet18"
      tags: "unpruned"
    }
    enabled: true
  }
  regularizer {
    type: L1
    weight: 3e-5
  }
  optimizer {
    adam {
      epsilon: 1e-7
      beta1: 0.9
      beta2: 0.999
      amsgrad: false
    }
  }
  # Make sure to change the path below to the model to resume training from
  #resume_model_path : "/workspace/tao-experiments/object_detection/models/pretrained_resnet18/pretrained_object_detection_vresnet18/resnet_18.hdf5"
  pretrain_model_path:  "/workspace/tao-experiments/object_detection/models/pretrained_resnet18/pretrained_object_detection_vresnet18/resnet_18.hdf5"
}
eval_config {
  average_precision_mode: INTEGRATE
  batch_size: 8
  matching_iou_threshold: 0.5
}
nms_config {
  confidence_threshold: 0.001
  clustering_iou_threshold: 0.5
  force_on_cpu: false
  top_k: 200
}
augmentation_config {
  hue: 0.1
  saturation: 1.5
  exposure: 1.5
  vertical_flip: 0.1
  horizontal_flip: 0.5
  jitter: 0.4
  output_width: 512
  output_height: 512
  output_channel: 3
  randomize_input_shape_period: 3
  mosaic_prob: 0.2
  mosaic_min_ratio: 0.1
}
dataset_config {
  data_sources: {
      tfrecords_path: "/workspace/tao-experiments/object_detection/data/train/tfrecords/train*"
      image_directory_path: "/workspace/tao-experiments/object_detection/data/train/"
  }
  include_difficult_in_training: true
  image_extension: "jpg"
  target_class_mapping {
      key: "aeroplane"
      value: "aeroplane"
  }
  validation_data_sources: {
      tfrecords_path: "/workspace/tao-experiments/object_detection/data/val/tfrecords/val*"
      image_directory_path: "/workspace/tao-experiments/object_detection/data/val/"
  }
}
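
For context, my mental model of what freeze_blocks and freeze_bn should do is roughly the following plain-Keras sketch (illustrative only; the layer-name prefixes are my guess based on the summary below, not the actual TAO implementation):

# Plain-Keras sketch of what I expect freeze_blocks / freeze_bn to do.
# Illustrative only: the prefix-to-block mapping is an assumption, not TAO code.
FROZEN_PREFIXES = ("conv1", "bn_conv1", "block_")  # first conv, its BN, and every residual block

def freeze_backbone(model):
    """Mark backbone layers as non-trainable before the model is compiled."""
    for layer in model.layers:
        if layer.name.startswith(FROZEN_PREFIXES):
            layer.trainable = False  # exclude these weights from gradient updates
        if layer.__class__.__name__ == "BatchNormalization":
            layer.trainable = False  # freeze_bn: keep scale/offset and stats fixed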

Training with this spec produces the following network structure (truncated below):


2024-01-30 14:36:49,231 [TAO Toolkit] [WARNING] tensorflow 137: From /usr/local/lib/python3.8/dist-packages/keras/backend/tensorflow_backend.py:973: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
Input (InputLayer)              (None, 3, 512, 512)  0
__________________________________________________________________________________________________
Input_qdq (QDQ)                 (None, 3, 512, 512)  1           Input[0][0]
__________________________________________________________________________________________________
conv1 (QuantizedConv2D)         (None, 64, 256, 256) 9408        Input_qdq[0][0]
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization)   (None, 64, 256, 256) 256         conv1[0][0]
__________________________________________________________________________________________________
activation_2 (ReLU)             (None, 64, 256, 256) 0           bn_conv1[0][0]
__________________________________________________________________________________________________
activation_2_qdq (QDQ)          (None, 64, 256, 256) 1           activation_2[0][0]
__________________________________________________________________________________________________
block_1a_conv_1 (QuantizedConv2 (None, 64, 128, 128) 36864       activation_2_qdq[0][0]
__________________________________________________________________________________________________
block_1a_bn_1 (BatchNormalizati (None, 64, 128, 128) 256         block_1a_conv_1[0][0]
__________________________________________________________________________________________________
block_1a_relu_1 (ReLU)          (None, 64, 128, 128) 0           block_1a_bn_1[0][0]
__________________________________________________________________________________________________
block_1a_relu_1_qdq (QDQ)       (None, 64, 128, 128) 1           block_1a_relu_1[0][0]



....




encoded_sm (Concatenate)        (None, 12288, 12)    0           sm_anchor[0][0]
                                                                 sm_bbox_processor[0][0]
__________________________________________________________________________________________________
encoded_detections (Concatenate (None, 16128, 12)    0           encoded_bg[0][0]
                                                                 encoded_md[0][0]
                                                                 encoded_sm[0][0]
__________________________________________________________________________________________________
decoded_predictions (YOLOv4Deco (None, 16128, 5)     0           encoded_detections[0][0]
__________________________________________________________________________________________________
NMS (NMSLayer)                  (None, 200, 6)       0           decoded_predictions[0][0]
==================================================================================================
Total params: 34,824,270
Trainable params: 34,790,646
Non-trainable params: 33,624
__________________________________________________________________________________________________

I have frozen the Batch Norm layers and all the blocks, so why are essentially all of the parameters still trainable?

Thanks!
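
In case it is useful, this is the kind of debugging snippet I have been using to see which layers actually end up frozen. It assumes you can get a handle on the Keras model that TAO builds (e.g. by patching the training script locally); it is only a sketch, not part of the TAO CLI:

from keras import backend as K

def report_trainable(model, show=20):
    """Print the layers holding the most trainable parameters."""
    rows = []
    for layer in model.layers:
        n_train = sum(K.count_params(w) for w in layer.trainable_weights)
        n_frozen = sum(K.count_params(w) for w in layer.non_trainable_weights)
        if n_train or n_frozen:
            rows.append((layer.name, n_train, n_frozen))
    rows.sort(key=lambda r: r[1], reverse=True)
    for name, n_train, n_frozen in rows[:show]:
        print("%-40s trainable=%12d frozen=%12d" % (name, n_train, n_frozen))
    print("total trainable: %d" % sum(r[1] for r in rows))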

I cannot reproduce the result.
May I know where you downloaded the resnet_18.hdf5 from?

I use

ngc registry model download-version nvidia/tao/pretrained_object_detection:resnet18
CLI_VERSION: Latest - 3.37.0 available (current: 3.22.0). Please update by using the command 'ngc version upgrade'

Downloaded 82.38 MB in 8s, Download speed: 10.28 MB/s
----------------------------------------------------------------------------------------------------------
   Transfer id: pretrained_object_detection_vresnet18
   Download status: Completed
   Downloaded local path: /home/rowden/github/nvidia-tao-exploration/pretrained_object_detection_vresnet18
   Total files downloaded: 1
   Total downloaded size: 82.38 MB
   Started at: 2024-02-01 09:14:33.994581
   Completed at: 2024-02-01 09:14:42.006398
   Duration taken: 8s
----------------------------------------------------------------------------------------------------------

It would also be nice to know what result you are getting.

As an update, the params are now as follows:

==================================================================================================
Total params: 34,824,182
Trainable params: 23,271,478
Non-trainable params: 11,552,704
__________________________________________________________________________________________________

I’m not sure what has changed. Is this now expected? This still seems like an abnormally large number of trainable params.
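
At least the reported split is self-consistent with the total:

# Sanity check using only the numbers reported in the summary above.
trainable = 23_271_478
non_trainable = 11_552_704
assert trainable + non_trainable == 34_824_182  # matches "Total params"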

You can refer to my log.
20240131_forum_280677_log.txt (59.8 KB)

The main difference is the model structure in the summary log. The hdf5 file has no QDQ nodes, and I am not sure why your log has them.
Maybe you loaded another model instead.
From your latest log, it is expected. You can verify it by adding up the Param # column.
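
If you want to double-check which model you are actually loading, you can list the layer names stored in the .hdf5 file, for example with h5py (a rough sketch; the group layout can differ between full-model and weights-only saves, and the path below is just an example):

import h5py

path = "pretrained_object_detection_vresnet18/resnet_18.hdf5"  # adjust to your download location
with h5py.File(path, "r") as f:
    group = f["model_weights"] if "model_weights" in f else f
    names = [n.decode() if isinstance(n, bytes) else n
             for n in group.attrs.get("layer_names", [])]
    print([n for n in names if "qdq" in n.lower()] or "no QDQ layers found")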

OK, well, it's nice to know we get the same numbers now.

Is there any chance you can explain why the trainable params are so high?

My understanding would be that if I freeze all those blocks, the remaining trainables would come from the 3x3 Conv just after the Input layer and the Fully Connected layer.

Are there really that many parameters in those two layers?
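
As a back-of-the-envelope check on my own assumption, using only numbers already in this thread:

# conv1's Param # from the summary vs. the reported trainable total.
conv1_params = 9_408             # first conv after the Input layer
trainable_total = 23_271_478     # from the latest summary
print(trainable_total - conv1_params)  # 23,262,070 params that must live in other layers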

Sorry, is there any more information on this?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

You can try to double-check with tao_tensorflow1_backend/nvidia_tao_tf1/core/templates/resnet.py at main · NVIDIA/tao_tensorflow1_backend · GitHub.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.