Error while running tao deformable_detr train

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc) : dGPU (docker)
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc) : deformable_detr
• TLT Version (Please run "tlt info --verbose" and share "docker_tag" here) : nvcr.io/nvidia/tao/tao-toolkit:4.0.0-pyt
• Training spec file(If have, please share here)

dataset_config:
  train_data_sources:
    - image_dir: /host/home/jung/Data/coco128/images/train2017
      json_file: /host/home/jung/Data/coco128/images/val.json
  val_data_sources:
    - image_dir: /host/home/jung/Data/coco128/images/train2017
      json_file: /host/home/jung/Data/coco128/images/val.json
  num_classes: 80
  batch_size: 2
  workers: 8
  augmentation_config:
    scales: [480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800]
    input_mean: [0.485, 0.456, 0.406]
    input_std: [0.229, 0.224, 0.225]
    horizontal_flip_prob: 0.5
    train_random_resize: [400, 500, 600]
    train_random_crop_min: 384
    train_random_crop_max: 600
    random_resize_max_size: 1333
    test_random_resize: 800
model_config:
  pretrained_backbone_path: /host/home/jung/Workspace/tao/pretrained_object_detection_vresnet50/resnet_50.hdf5
  backbone: resnet50
  train_backbone: True
  num_feature_levels: 4
  dec_layers: 6
  enc_layers: 6
  num_queries: 300
  with_box_refine: True
  dropout_ratio: 0.3
train_config:
  optim:
    lr_backbone: 2e-5
    lr: 2e-4
    lr_steps: [10, 20, 30, 40]
    momentum: 0.9
  epochs: 50

• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

I pulled the Docker image (the version mentioned above) and ran the following command:
deformable_detr train -e …/spec_file.yaml
and I get the following error:

INFO: Loading faiss with AVX2 support.
INFO: Could not load library with AVX2 support due to:
ModuleNotFoundError(“No module named ‘faiss.swigfaiss_avx2’”)
INFO: Loading faiss.
INFO: Successfully loaded faiss.
NOTE! Installing ujson may make loading annotations faster.
NOTE! Installing ujson may make loading annotations faster.
ANTLR runtime and generated code versions disagree: 4.8!=4.9.3
ANTLR runtime and generated code versions disagree: 4.8!=4.9.3
[NeMo W 2023-07-05 07:01:12 nemo_logging:349] :97: UserWarning:
‘spec_file.yaml’ is validated against ConfigStore schema with the same name.
This behavior is deprecated in Hydra 1.1 and will be removed in Hydra 1.2.
See https://hydra.cc/docs/next/upgrades/1.0_to_1.1/automatic_schema_matching for migration instructions.

Error executing job with overrides: [‘num_gpus=1’, ‘num_nodes=1’]
An error occurred during Hydra’s exception formatting:
AssertionError()
Traceback (most recent call last):
File “</opt/conda/lib/python3.8/site-packages/nvidia_tao_pytorch/cv/deformable_detr/scripts/train.py>”, line 3, in
File “”, line 97, in
File “”, line 99, in wrapper
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 377, in _run_hydra
run_and_report(
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 294, in run_and_report
raise ex
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 211, in run_and_report
return func()
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 378, in
lambda: hydra.run(
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/hydra.py”, line 111, in run
_ = ret.return_value
File “/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py”, line 233, in return_value
raise self._return_value
File “/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py”, line 160, in run_job
ret.return_value = task_function(task_cfg)
File “”, line 91, in main
File “/opt/conda/lib/python3.8/site-packages/omegaconf/dictconfig.py”, line 361, in getattr
self._format_and_raise(key=key, value=None, cause=e)
File “/opt/conda/lib/python3.8/site-packages/omegaconf/base.py”, line 231, in _format_and_raise
format_and_raise(
File “/opt/conda/lib/python3.8/site-packages/omegaconf/_utils.py”, line 873, in format_and_raise
_raise(ex, cause)
File “/opt/conda/lib/python3.8/site-packages/omegaconf/_utils.py”, line 771, in _raise
raise ex.with_traceback(sys.exc_info()[2]) # set env var OC_CAUSE=1 for full trace
File “/opt/conda/lib/python3.8/site-packages/omegaconf/dictconfig.py”, line 353, in getattr
return self._get_impl(
File “/opt/conda/lib/python3.8/site-packages/omegaconf/dictconfig.py”, line 453, in _get_impl
return self._resolve_with_default(
File “/opt/conda/lib/python3.8/site-packages/omegaconf/basecontainer.py”, line 96, in _resolve_with_default
raise MissingMandatoryValue(“Missing mandatory value: $FULL_KEY”)
omegaconf.errors.MissingMandatoryValue: Missing mandatory value: encryption_key
full_key: encryption_key
object_type=DDTrainExpConfig
Telemetry data couldn’t be sent, but the command ran successfully.
[Error]: <urlopen error [Errno -2] Name or service not known>
Execution status: FAIL

I tried running the command with extra options:
deformable_detr train -e …/spec_file.yaml --gpus 1 --num_nodes 1
but I still get the same error.

Since this runs inside the Docker container, I assumed the environment would already be set up.

Did I forget some arguments, either on the command line or in the spec file?

Please add the key on the command line:

deformable_detr train [-h] -e <experiment_spec>
[-r <results_dir>]
[-k <key>]

What is the key option really for?
The help says: "The encryption key to decrypt the model. This argument is only required with a .tlt model file."
Is the pretrained_backbone the model that it is trying to decrypt?
What should I put for the key if I downloaded a pretrained backbone from the NGC registry?

Any string is OK.
For example,
-k 123

Ahhh, sorry to bother you, but I can't get it working.

I tried the command with a random string,
but I get the following error.

deformable_detr train -e ./spec_file.yaml -r ./results/ -k 123

INFO: Loading faiss with AVX2 support.
INFO: Could not load library with AVX2 support due to:
ModuleNotFoundError(“No module named ‘faiss.swigfaiss_avx2’”)
INFO: Loading faiss.
INFO: Successfully loaded faiss.
NOTE! Installing ujson may make loading annotations faster.
NOTE! Installing ujson may make loading annotations faster.
ANTLR runtime and generated code versions disagree: 4.8!=4.9.3
ANTLR runtime and generated code versions disagree: 4.8!=4.9.3
[NeMo W 2023-07-06 00:25:50 nemo_logging:349] :97: UserWarning:
‘spec_file.yaml’ is validated against ConfigStore schema with the same name.
This behavior is deprecated in Hydra 1.1 and will be removed in Hydra 1.2.
See https://hydra.cc/docs/next/upgrades/1.0_to_1.1/automatic_schema_matching for migration instructions.

[NeMo W 2023-07-06 00:25:50 nemo_logging:349] /opt/conda/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py:60: LightningDeprecationWarning: DataModule property train_transforms was deprecated in v1.5 and will be removed in v1.7.
rank_zero_deprecation(

Created a temporary directory at /tmp/tmp0kzs8g_w
Writing /tmp/tmp0kzs8g_w/_remote_module_non_scriptable.py
loading trained weights from /host/home/jung/Workspace/tao/pretrained_object_detection_vresnet50/resnet_50.hdf5
Error executing job with overrides: [‘output_dir=./results/’, ‘num_gpus=1’, ‘num_nodes=1’, ‘encryption_key=123’]
An error occurred during Hydra’s exception formatting:
AssertionError()
Traceback (most recent call last):
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 252, in run_and_report
assert mdl is not None
AssertionError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File “</opt/conda/lib/python3.8/site-packages/nvidia_tao_pytorch/cv/deformable_detr/scripts/train.py>”, line 3, in
File “”, line 97, in
File “”, line 99, in wrapper
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 377, in _run_hydra
run_and_report(
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 294, in run_and_report
raise ex
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 211, in run_and_report
return func()
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 378, in
lambda: hydra.run(
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/hydra.py”, line 111, in run
_ = ret.return_value
File “/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py”, line 233, in return_value
raise self._return_value
File “/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py”, line 160, in run_job
ret.return_value = task_function(task_cfg)
File “”, line 91, in main
File “”, line 40, in run_experiment
File “”, line 38, in init
File “”, line 42, in _build_model
File “”, line 130, in build_model
File “”, line 60, in init
File “”, line 147, in init
File “”, line 53, in load_pretrained_weights
File “/opt/conda/lib/python3.8/site-packages/torch/serialization.py”, line 735, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File “/opt/conda/lib/python3.8/site-packages/torch/serialization.py”, line 942, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, ‘H’.
Telemetry data couldn’t be sent, but the command ran successfully.
[Error]: <urlopen error [Errno -2] Name or service not known>
Execution status: FAIL
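For what it's worth, the UnpicklingError above is the symptom of torch.load being handed an HDF5 file: resnet_50.hdf5 is a TensorFlow/Keras checkpoint, while the PyTorch deformable_detr loader expects a PyTorch checkpoint (.pth or encrypted .tlt). A quick, hypothetical way to check what a weights file actually is (the helper name is mine, not part of TAO):

```python
def weights_format(path):
    """Guess a checkpoint's container format from its leading bytes."""
    with open(path, "rb") as f:
        magic = f.read(8)
    if magic == b"\x89HDF\r\n\x1a\n":   # HDF5 signature (Keras/TF .hdf5)
        return "hdf5"
    if magic[:4] == b"PK\x03\x04":      # zip archive (modern torch.save)
        return "zip"
    return "unknown (legacy pickle, or an encrypted .tlt)"
```

torch.load cannot parse HDF5, which is why its legacy pickle path trips over one of the signature bytes ("invalid load key").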

Can you change to this pretrained model and retry?
wget 'https://api.ngc.nvidia.com/v2/models/nvidia/tao/peoplenet_transformer/versions/trainable_v1.0/files/resnet50_peoplenet_transformer.tlt'

Then set:
pretrained_backbone_path: resnet50_peoplenet_transformer.tlt

The key is nvidia_tao.

Hmmm, it seems the backbone checkpoint uses somewhat different module names.
Is it possible that the path isn't right?
I have tried a relative path, an absolute path, and the following:
pretrained_backbone_path: resnet50_peoplenet_transformer.tlt

INFO: Loading faiss with AVX2 support.
INFO: Could not load library with AVX2 support due to:
ModuleNotFoundError(“No module named ‘faiss.swigfaiss_avx2’”)
INFO: Loading faiss.
INFO: Successfully loaded faiss.
NOTE! Installing ujson may make loading annotations faster.
NOTE! Installing ujson may make loading annotations faster.
ANTLR runtime and generated code versions disagree: 4.8!=4.9.3
ANTLR runtime and generated code versions disagree: 4.8!=4.9.3
[NeMo W 2023-07-06 05:11:16 nemo_logging:349] :97: UserWarning:
‘spec_file.yaml’ is validated against ConfigStore schema with the same name.
This behavior is deprecated in Hydra 1.1 and will be removed in Hydra 1.2.
See https://hydra.cc/docs/next/upgrades/1.0_to_1.1/automatic_schema_matching for migration instructions.

[NeMo W 2023-07-06 05:11:16 nemo_logging:349] /opt/conda/lib/python3.8/site-packages/pytorch_lightning/core/datamodule.py:60: LightningDeprecationWarning: DataModule property train_transforms was deprecated in v1.5 and will be removed in v1.7.
rank_zero_deprecation(

Created a temporary directory at /tmp/tmpwsx5omt9
Writing /tmp/tmpwsx5omt9/_remote_module_non_scriptable.py
loading trained weights from resnet50_peoplenet_transformer.tlt
Error executing job with overrides: [‘output_dir=./results’, ‘num_gpus=1’, ‘num_nodes=1’, ‘encryption_key=nvidia_tao’]
An error occurred during Hydra’s exception formatting:
AssertionError()
Traceback (most recent call last):
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 252, in run_and_report
assert mdl is not None
AssertionError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File “</opt/conda/lib/python3.8/site-packages/nvidia_tao_pytorch/cv/deformable_detr/scripts/train.py>”, line 3, in
File “”, line 97, in
File “”, line 99, in wrapper
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 377, in _run_hydra
run_and_report(
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 294, in run_and_report
raise ex
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 211, in run_and_report
return func()
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/utils.py”, line 378, in
lambda: hydra.run(
File “/opt/conda/lib/python3.8/site-packages/hydra/_internal/hydra.py”, line 111, in run
_ = ret.return_value
File “/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py”, line 233, in return_value
raise self._return_value
File “/opt/conda/lib/python3.8/site-packages/hydra/core/utils.py”, line 160, in run_job
ret.return_value = task_function(task_cfg)
File “”, line 91, in main
File “”, line 40, in run_experiment
File “”, line 38, in init
File “”, line 42, in _build_model
File “”, line 130, in build_model
File “”, line 60, in init
File “”, line 151, in init
File “”, line 235, in resnet50
File “/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py”, line 1660, in load_state_dict
raise RuntimeError(‘Error(s) in loading state_dict for {}:\n\t{}’.format(
RuntimeError: Error(s) in loading state_dict for ResNet:
Missing key(s) in state_dict: “conv1.weight”, “bn1.weight”, “bn1.bias”, “bn1.running_mean”, “bn1.running_var”, “layer1.0.conv1.weight”, “layer1.0.bn1.weight”, “layer1.0.bn1.bias”, “layer1.0.bn1.running_mean”, “layer1.0.bn1.running_var”, “layer1.0.conv2.weight”, “layer1.0.bn2.weight”, “layer1.0.bn2.bias”, “layer1.0.bn2.running_mean”, “layer1.0.bn2.running_var”, “layer1.0.conv3.weight”, “layer1.0.bn3.weight”, “layer1.0.bn3.bias”, “layer1.0.bn3.running_mean”, “layer1.0.bn3.running_var”, “layer1.0.downsample.0.weight”, “layer1.0.downsample.1.weight”, “layer1.0.downsample.1.bias”, “layer1.0.downsample.1.running_mean”, “layer1.0.downsample.1.running_var”, “layer1.1.conv1.weight”, “layer1.1.bn1.weight”, “layer1.1.bn1.bias”, “layer1.1.bn1.running_mean”, “layer1.1.bn1.running_var”, “layer1.1.conv2.weight”, “layer1.1.bn2.weight”, “layer1.1.bn2.bias”, “layer1.1.bn2.running_mean”, “layer1.1.bn2.running_var”, “layer1.1.conv3.weight”, “layer1.1.bn3.weight”, “layer1.1.bn3.bias”, “layer1.1.bn3.running_mean”, “layer1.1.bn3.running_var”, “layer1.2.conv1.weight”, “layer1.2.bn1.weight”, “layer1.2.bn1.bias”, “layer1.2.bn1.running_mean”, “layer1.2.bn1.running_var”, “layer1.2.conv2.weight”, “layer1.2.bn2.weight”, “layer1.2.bn2.bias”, “layer1.2.bn2.running_mean”, “layer1.2.bn2.running_var”, “layer1.2.conv3.weight”, “layer1.2.bn3.weight”, “layer1.2.bn3.bias”, “layer1.2.bn3.running_mean”, “layer1.2.bn3.running_var”, “layer2.0.conv1.weight”, “layer2.0.bn1.weight”, “layer2.0.bn1.bias”, “layer2.0.bn1.running_mean”, “layer2.0.bn1.running_var”, “layer2.0.conv2.weight”, “layer2.0.bn2.weight”, “layer2.0.bn2.bias”, “layer2.0.bn2.running_mean”, “layer2.0.bn2.running_var”, “layer2.0.conv3.weight”, “layer2.0.bn3.weight”, “layer2.0.bn3.bias”, “layer2.0.bn3.running_mean”, “layer2.0.bn3.running_var”, “layer2.0.downsample.0.weight”, “layer2.0.downsample.1.weight”, “layer2.0.downsample.1.bias”, “layer2.0.downsample.1.running_mean”, “layer2.0.downsample.1.running_var”, “layer2.1.conv1.weight”, 
“layer2.1.bn1.weight”, “layer2.1.bn1.bias”, “layer2.1.bn1.running_mean”, “layer2.1.bn1.running_var”, “layer2.1.conv2.weight”, “layer2.1.bn2.weight”, “layer2.1.bn2.bias”, “layer2.1.bn2.running_mean”, “layer2.1.bn2.running_var”, “layer2.1.conv3.weight”, “layer2.1.bn3.weight”, “layer2.1.bn3.bias”, “layer2.1.bn3.running_mean”, “layer2.1.bn3.running_var”, “layer2.2.conv1.weight”, “layer2.2.bn1.weight”, “layer2.2.bn1.bias”, “layer2.2.bn1.running_mean”, “layer2.2.bn1.running_var”, “layer2.2.conv2.weight”, “layer2.2.bn2.weight”, “layer2.2.bn2.bias”, “layer2.2.bn2.running_mean”, “layer2.2.bn2.running_var”, “layer2.2.conv3.weight”, “layer2.2.bn3.weight”, “layer2.2.bn3.bias”, “layer2.2.bn3.running_mean”, “layer2.2.bn3.running_var”, “layer2.3.conv1.weight”, “layer2.3.bn1.weight”, “layer2.3.bn1.bias”, “layer2.3.bn1.running_mean”, “layer2.3.bn1.running_var”, “layer2.3.conv2.weight”, “layer2.3.bn2.weight”, “layer2.3.bn2.bias”, “layer2.3.bn2.running_mean”, “layer2.3.bn2.running_var”, “layer2.3.conv3.weight”, “layer2.3.bn3.weight”, “layer2.3.bn3.bias”, “layer2.3.bn3.running_mean”, “layer2.3.bn3.running_var”, “layer3.0.conv1.weight”, “layer3.0.bn1.weight”, “layer3.0.bn1.bias”, “layer3.0.bn1.running_mean”, “layer3.0.bn1.running_var”, “layer3.0.conv2.weight”, “layer3.0.bn2.weight”, “layer3.0.bn2.bias”, “layer3.0.bn2.running_mean”, “layer3.0.bn2.running_var”, “layer3.0.conv3.weight”, “layer3.0.bn3.weight”, “layer3.0.bn3.bias”, “layer3.0.bn3.running_mean”, “layer3.0.bn3.running_var”, “layer3.0.downsample.0.weight”, “layer3.0.downsample.1.weight”, “layer3.0.downsample.1.bias”, “layer3.0.downsample.1.running_mean”, “layer3.0.downsample.1.running_var”, “layer3.1.conv1.weight”, “layer3.1.bn1.weight”, “layer3.1.bn1.bias”, “layer3.1.bn1.running_mean”, “layer3.1.bn1.running_var”, “layer3.1.conv2.weight”, “layer3.1.bn2.weight”, “layer3.1.bn2.bias”, “layer3.1.bn2.running_mean”, “layer3.1.bn2.running_var”, “layer3.1.conv3.weight”, “layer3.1.bn3.weight”, “layer3.1.bn3.bias”, 
“layer3.1.bn3.running_mean”, “layer3.1.bn3.running_var”, “layer3.2.conv1.weight”, “layer3.2.bn1.weight”, “layer3.2.bn1.bias”, “layer3.2.bn1.running_mean”, “layer3.2.bn1.running_var”, “layer3.2.conv2.weight”, “layer3.2.bn2.weight”, “layer3.2.bn2.bias”, “layer3.2.bn2.running_mean”, “layer3.2.bn2.running_var”, “layer3.2.conv3.weight”, “layer3.2.bn3.weight”, “layer3.2.bn3.bias”, “layer3.2.bn3.running_mean”, “layer3.2.bn3.running_var”, “layer3.3.conv1.weight”, “layer3.3.bn1.weight”, “layer3.3.bn1.bias”, “layer3.3.bn1.running_mean”, “layer3.3.bn1.running_var”, “layer3.3.conv2.weight”, “layer3.3.bn2.weight”, “layer3.3.bn2.bias”, “layer3.3.bn2.running_mean”, “layer3.3.bn2.running_var”, “layer3.3.conv3.weight”, “layer3.3.bn3.weight”, “layer3.3.bn3.bias”, “layer3.3.bn3.running_mean”, “layer3.3.bn3.running_var”, “layer3.4.conv1.weight”, “layer3.4.bn1.weight”, “layer3.4.bn1.bias”, “layer3.4.bn1.running_mean”, “layer3.4.bn1.running_var”, “layer3.4.conv2.weight”, “layer3.4.bn2.weight”, “layer3.4.bn2.bias”, “layer3.4.bn2.running_mean”, “layer3.4.bn2.running_var”, “layer3.4.conv3.weight”, “layer3.4.bn3.weight”, “layer3.4.bn3.bias”, “layer3.4.bn3.running_mean”, “layer3.4.bn3.running_var”, “layer3.5.conv1.weight”, “layer3.5.bn1.weight”, “layer3.5.bn1.bias”, “layer3.5.bn1.running_mean”, “layer3.5.bn1.running_var”, “layer3.5.conv2.weight”, “layer3.5.bn2.weight”, “layer3.5.bn2.bias”, “layer3.5.bn2.running_mean”, “layer3.5.bn2.running_var”, “layer3.5.conv3.weight”, “layer3.5.bn3.weight”, “layer3.5.bn3.bias”, “layer3.5.bn3.running_mean”, “layer3.5.bn3.running_var”, “layer4.0.conv1.weight”, “layer4.0.bn1.weight”, “layer4.0.bn1.bias”, “layer4.0.bn1.running_mean”, “layer4.0.bn1.running_var”, “layer4.0.conv2.weight”, “layer4.0.bn2.weight”, “layer4.0.bn2.bias”, “layer4.0.bn2.running_mean”, “layer4.0.bn2.running_var”, “layer4.0.conv3.weight”, “layer4.0.bn3.weight”, “layer4.0.bn3.bias”, “layer4.0.bn3.running_mean”, “layer4.0.bn3.running_var”, “layer4.0.downsample.0.weight”, 
“layer4.0.downsample.1.weight”, “layer4.0.downsample.1.bias”, “layer4.0.downsample.1.running_mean”, “layer4.0.downsample.1.running_var”, “layer4.1.conv1.weight”, “layer4.1.bn1.weight”, “layer4.1.bn1.bias”, “layer4.1.bn1.running_mean”, “layer4.1.bn1.running_var”, “layer4.1.conv2.weight”, “layer4.1.bn2.weight”, “layer4.1.bn2.bias”, “layer4.1.bn2.running_mean”, “layer4.1.bn2.running_var”, “layer4.1.conv3.weight”, “layer4.1.bn3.weight”, “layer4.1.bn3.bias”, “layer4.1.bn3.running_mean”, “layer4.1.bn3.running_var”, “layer4.2.conv1.weight”, “layer4.2.bn1.weight”, “layer4.2.bn1.bias”, “layer4.2.bn1.running_mean”, “layer4.2.bn1.running_var”, “layer4.2.conv2.weight”, “layer4.2.bn2.weight”, “layer4.2.bn2.bias”, “layer4.2.bn2.running_mean”, “layer4.2.bn2.running_var”, “layer4.2.conv3.weight”, “layer4.2.bn3.weight”, “layer4.2.bn3.bias”, “layer4.2.bn3.running_mean”, “layer4.2.bn3.running_var”, “fc.weight”, “fc.bias”.
Unexpected key(s) in state_dict: “model.model.transformer.level_embed”, “model.model.transformer.encoder.layers.0.self_attn.sampling_offsets.weight”, “model.model.transformer.encoder.layers.0.self_attn.sampling_offsets.bias”, “model.model.transformer.encoder.layers.0.self_attn.attention_weights.weight”, “model.model.transformer.encoder.layers.0.self_attn.attention_weights.bias”, “model.model.transformer.encoder.layers.0.self_attn.value_proj.weight”, “model.model.transformer.encoder.layers.0.self_attn.value_proj.bias”, “model.model.transformer.encoder.layers.0.self_attn.output_proj.weight”, “model.model.transformer.encoder.layers.0.self_attn.output_proj.bias”, “model.model.transformer.encoder.layers.0.norm1.weight”, “model.model.transformer.encoder.layers.0.norm1.bias”, “model.model.transformer.encoder.layers.0.linear1.weight”, “model.model.transformer.encoder.layers.0.linear1.bias”, “model.model.transformer.encoder.layers.0.linear2.weight”, “model.model.transformer.encoder.layers.0.linear2.bias”, “model.model.transformer.encoder.layers.0.norm2.weight”, “model.model.transformer.encoder.layers.0.norm2.bias”, “model.model.transformer.encoder.layers.1.self_attn.sampling_offsets.weight”, “model.model.transformer.encoder.layers.1.self_attn.sampling_offsets.bias”, “model.model.transformer.encoder.layers.1.self_attn.attention_weights.weight”, “model.model.transformer.encoder.layers.1.self_attn.attention_weights.bias”, “model.model.transformer.encoder.layers.1.self_attn.value_proj.weight”, “model.model.transformer.encoder.layers.1.self_attn.value_proj.bias”, “model.model.transformer.encoder.layers.1.self_attn.output_proj.weight”, “model.model.transformer.encoder.layers.1.self_attn.output_proj.bias”, “model.model.transformer.encoder.layers.1.norm1.weight”, “model.model.transformer.encoder.layers.1.norm1.bias”, “model.model.transformer.encoder.layers.1.linear1.weight”, “model.model.transformer.encoder.layers.1.linear1.bias”, 
“model.model.transformer.encoder.layers.1.linear2.weight”, “model.model.transformer.encoder.layers.1.linear2.bias”, “model.model.transformer.encoder.layers.1.norm2.weight”, “model.model.transformer.encoder.layers.1.norm2.bias”, “model.model.transformer.encoder.layers.2.self_attn.sampling_offsets.weight”, “model.model.transformer.encoder.layers.2.self_attn.sampling_offsets.bias”, “model.model.transformer.encoder.layers.2.self_attn.attention_weights.weight”, “model.model.transformer.encoder.layers.2.self_attn.attention_weights.bias”, “model.model.transformer.encoder.layers.2.self_attn.value_proj.weight”, “model.model.transformer.encoder.layers.2.self_attn.value_proj.bias”, “model.model.transformer.encoder.layers.2.self_attn.output_proj.weight”, “model.model.transformer.encoder.layers.2.self_attn.output_proj.bias”, “model.model.transformer.encoder.layers.2.norm1.weight”, “model.model.transformer.encoder.layers.2.norm1.bias”, “model.model.transformer.encoder.layers.2.linear1.weight”, “model.model.transformer.encoder.layers.2.linear1.bias”, “model.model.transformer.encoder.layers.2.linear2.weight”, “model.model.transformer.encoder.layers.2.linear2.bias”, “model.model.transformer.encoder.layers.2.norm2.weight”, “model.model.transformer.encoder.layers.2.norm2.bias”, “model.model.transformer.encoder.layers.3.self_attn.sampling_offsets.weight”, “model.model.transformer.encoder.layers.3.self_attn.sampling_offsets.bias”, “model.model.transformer.encoder.layers.3.self_attn.attention_weights.weight”, “model.model.transformer.encoder.layers.3.self_attn.attention_weights.bias”, “model.model.transformer.encoder.layers.3.self_attn.value_proj.weight”, “model.model.transformer.encoder.layers.3.self_attn.value_proj.bias”, “model.model.transformer.encoder.layers.3.self_attn.output_proj.weight”, “model.model.transformer.encoder.layers.3.self_attn.output_proj.bias”, “model.model.transformer.encoder.layers.3.norm1.weight”, “model.model.transformer.encoder.layers.3.norm1.bias”, 
“model.model.transformer.encoder.layers.3.linear1.weight”, “model.model.transformer.encoder.layers.3.linear1.bias”, “model.model.transformer.encoder.layers.3.linear2.weight”, “model.model.transformer.encoder.layers.3.linear2.bias”, “model.model.transformer.encoder.layers.3.norm2.weight”, “model.model.transformer.encoder.layers.3.norm2.bias”, “model.model.transformer.encoder.layers.4.self_attn.sampling_offsets.weight”, “model.model.transformer.encoder.layers.4.self_attn.sampling_offsets.bias”, “model.model.transformer.encoder.layers.4.self_attn.attention_weights.weight”, “model.model.transformer.encoder.layers.4.self_attn.attention_weights.bias”, “model.model.transformer.encoder.layers.4.self_attn.value_proj.weight”, “model.model.transformer.encoder.layers.4.self_attn.value_proj.bias”, “model.model.transformer.encoder.layers.4.self_attn.output_proj.weight”, “model.model.transformer.encoder.layers.4.self_attn.output_proj.bias”, “model.model.transformer.encoder.layers.4.norm1.weight”, “model.model.transformer.encoder.layers.4.norm1.bias”, “model.model.transformer.encoder.layers.4.linear1.weight”, “model.model.transformer.encoder.layers.4.linear1.bias”, “model.model.transformer.encoder.layers.4.linear2.weight”, “model.model.transformer.encoder.layers.4.linear2.bias”, “model.model.transformer.encoder.layers.4.norm2.weight”, “model.model.transformer.encoder.layers.4.norm2.bias”, “model.model.transformer.encoder.layers.5.self_attn.sampling_offsets.weight”, “model.model.transformer.encoder.layers.5.self_attn.sampling_offsets.bias”, “model.model.transformer.encoder.layers.5.self_attn.attention_weights.weight”, “model.model.transformer.encoder.layers.5.self_attn.attention_weights.bias”, “model.model.transformer.encoder.layers.5.self_attn.value_proj.weight”, “model.model.transformer.encoder.layers.5.self_attn.value_proj.bias”, “model.model.transformer.encoder.layers.5.self_attn.output_proj.weight”, “model.model.transformer.encoder.layers.5.self_attn.output_proj.bias”, 
“model.model.transformer.encoder.layers.5.norm1.weight”, “model.model.transformer.encoder.layers.5.norm1.bias”, “model.model.transformer.encoder.layers.5.linear1.weight”, “model.model.transformer.encoder.layers.5.linear1.bias”, “model.model.transformer.encoder.layers.5.linear2.weight”, “model.model.transformer.encoder.layers.5.linear2.bias”, “model.model.transformer.encoder.layers.5.norm2.weight”, “model.model.transformer.encoder.layers.5.norm2.bias”, “model.model.transformer.decoder.layers.0.cross_attn.sampling_offsets.weight”, “model.model.transformer.decoder.layers.0.cross_attn.sampling_offsets.bias”, “model.model.transformer.decoder.layers.0.cross_attn.attention_weights.weight”, “model.model.transformer.decoder.layers.0.cross_attn.attention_weights.bias”, “model.model.transformer.decoder.layers.0.cross_attn.value_proj.weight”, “model.model.transformer.decoder.layers.0.cross_attn.value_proj.bias”, “model.model.transformer.decoder.layers.0.cross_attn.output_proj.weight”, “model.model.transformer.decoder.layers.0.cross_attn.output_proj.bias”, “model.model.transformer.decoder.layers.0.norm1.weight”, “model.model.transformer.decoder.layers.0.norm1.bias”, “model.model.transformer.decoder.layers.0.self_attn.in_proj_weight”, “model.model.transformer.decoder.layers.0.self_attn.in_proj_bias”, “model.model.transformer.decoder.layers.0.self_attn.out_proj.weight”, “model.model.transformer.decoder.layers.0.self_attn.out_proj.bias”, “model.model.transformer.decoder.layers.0.norm2.weight”, “model.model.transformer.decoder.layers.0.norm2.bias”, “model.model.transformer.decoder.layers.0.linear1.weight”, “model.model.transformer.decoder.layers.0.linear1.bias”, “model.model.transformer.decoder.layers.0.linear2.weight”, “model.model.transformer.decoder.layers.0.linear2.bias”, “model.model.transformer.decoder.layers.0.norm3.weight”, “model.model.transformer.decoder.layers.0.norm3.bias”, “model.model.transformer.decoder.layers.1.cross_attn.sampling_offsets.weight”, 
“model.model.transformer.decoder.layers.1.cross_attn.sampling_offsets.bias”, “model.model.transformer.decoder.layers.1.cross_attn.attention_weights.weight”, “model.model.transformer.decoder.layers.1.cross_attn.attention_weights.bias”, “model.model.transformer.decoder.layers.1.cross_attn.value_proj.weight”, “model.model.transformer.decoder.layers.1.cross_attn.value_proj.bias”, “model.model.transformer.decoder.layers.1.cross_attn.output_proj.weight”, “model.model.transformer.decoder.layers.1.cross_attn.output_proj.bias”, “model.model.transformer.decoder.layers.1.norm1.weight”, “model.model.transformer.decoder.layers.1.norm1.bias”, “model.model.transformer.decoder.layers.1.self_attn.in_proj_weight”, “model.model.transformer.decoder.layers.1.self_attn.in_proj_bias”, “model.model.transformer.decoder.layers.1.self_attn.out_proj.weight”, “model.model.transformer.decoder.layers.1.self_attn.out_proj.bias”, “model.model.transformer.decoder.layers.1.norm2.weight”, “model.model.transformer.decoder.layers.1.norm2.bias”, “model.model.transformer.decoder.layers.1.linear1.weight”, “model.model.transformer.decoder.layers.1.linear1.bias”, “model.model.transformer.decoder.layers.1.linear2.weight”, “model.model.transformer.decoder.layers.1.linear2.bias”, “model.model.transformer.decoder.layers.1.norm3.weight”, “model.model.transformer.decoder.layers.1.norm3.bias”, “model.model.transformer.decoder.layers.2.cross_attn.sampling_offsets.weight”, “model.model.transformer.decoder.layers.2.cross_attn.sampling_offsets.bias”, “model.model.transformer.decoder.layers.2.cross_attn.attention_weights.weight”, “model.model.transformer.decoder.layers.2.cross_attn.attention_weights.bias”, “model.model.transformer.decoder.layers.2.cross_attn.value_proj.weight”, “model.model.transformer.decoder.layers.2.cross_attn.value_proj.bias”, “model.model.transformer.decoder.layers.2.cross_attn.output_proj.weight”, “model.model.transformer.decoder.layers.2.cross_attn.output_proj.bias”, 
…(hundreds of additional missing keys omitted for brevity: transformer decoder layers 2–5, the bbox_embed and class_embed heads, query_embed, input_proj, and every ResNet-50 backbone weight through "model.model.backbone.0.body.layer4.2.bn3.running_var").
Telemetry data couldn't be sent, but the command ran successfully.
[Error]: <urlopen error [Errno -2] Name or service not known>
Execution status: FAIL
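The wall of "missing keys" above is PyTorch's strict `load_state_dict` check: every parameter the model defines must exist in the checkpoint, and the listed keys do not. A quick way to see exactly what mismatches (without the training job crashing) is to load with `strict=False` and inspect the returned result. This is a minimal sketch with toy modules — `OldModel`/`NewModel` are illustrative stand-ins, not TAO classes:

```python
import torch
import torch.nn as nn

# Toy stand-ins: a "checkpoint" saved from a model that lacks
# layers the current model expects (the situation in the log above).
class OldModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)

class NewModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)
        self.head = nn.Linear(8, 4)  # extra layers absent from the checkpoint

ckpt = OldModel().state_dict()
model = NewModel()

# strict=False reports mismatches instead of raising RuntimeError.
result = model.load_state_dict(ckpt, strict=False)
print("missing:", result.missing_keys)        # keys the model defines but the checkpoint lacks
print("unexpected:", result.unexpected_keys)  # keys the checkpoint has but the model lacks
```

In the log above, *everything* including the backbone is missing, which suggests the checkpoint's keys don't line up with the model's key names at all — consistent with feeding an incompatible pretrained file.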

Oh, by the way:
were there extra steps needed after downloading the pretrained backbone model from the NGC registry?
My file's extension is .hdf5, but the one from the link you gave me is .tlt.
Did I miss a step described in the documentation,
or is it generally not recommended to download the model from the NGC registry?
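One way to tell the two formats apart locally: a genuine Keras/HDF5 backbone starts with the fixed 8-byte HDF5 signature, while an encrypted `.tlt` archive does not. This is a hedged sketch (the demo file path is made up, not a real downloaded model):

```python
# A genuine HDF5 file begins with this 8-byte magic signature.
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"

def looks_like_hdf5(path):
    """Return True if the file starts with the HDF5 signature."""
    with open(path, "rb") as f:
        return f.read(8) == HDF5_MAGIC

# Demo with a fabricated file (substitute the path of your downloaded model):
with open("/tmp/fake.hdf5", "wb") as f:
    f.write(HDF5_MAGIC + b"\x00" * 8)

print(looks_like_hdf5("/tmp/fake.hdf5"))
```

If the check fails on a file named `.hdf5`, it may actually be an encrypted `.tlt` payload that was renamed, which would explain the key mismatch.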

Since there has been no update from you for a while, we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks!

Could you change to below and run again? Thanks!

model_config:
  pretrained_path: "resnet50_peoplenet_transformer.tlt"
  backbone: resnet50
  train_backbone: True
  num_feature_levels: 2
  dec_layers: 6
  enc_layers: 6
  num_queries: 300
  with_box_refine: True
  dropout_ratio: 0.3

The key is nvidia_tao
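Before rerunning, it can help to confirm the edited spec actually parses as YAML with the fields you expect — a common pitfall is omitting the space after the colon (`pretrained_path:"..."`), which YAML does not treat as a key/value pair. A minimal sanity check with PyYAML (the inline spec below mirrors the suggested config, not a file on disk):

```python
import yaml  # PyYAML

spec_text = """
model_config:
  pretrained_path: "resnet50_peoplenet_transformer.tlt"
  backbone: resnet50
  train_backbone: True
  num_feature_levels: 2
  dec_layers: 6
  enc_layers: 6
  num_queries: 300
  with_box_refine: True
  dropout_ratio: 0.3
"""

cfg = yaml.safe_load(spec_text)["model_config"]
# Verify the fields the trainer will read are present and typed as expected.
print(cfg["pretrained_path"], cfg["backbone"], cfg["num_feature_levels"])
```

Running the same check against your real spec file (`yaml.safe_load(open(path))`) catches formatting mistakes before the container launches.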

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.