tlt-infer not working with saved model

Hi Guys,

I am trying to run tlt-infer on an intermediate model that was saved during training, but it fails with the following error:

Instructions for updating:
Colocations handled automatically by placer.
/usr/local/lib/python2.7/dist-packages/keras/engine/saving.py:292: UserWarning: No training configuration found in save file: the model was *not* compiled. Compile it manually.
  warnings.warn('No training configuration found in save file: '
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 3, 384, 1248)      0         
_________________________________________________________________
model_1 (Model)              [(None, 6, 24, 78), (None 4934238   
=================================================================
Total params: 4,934,238
Trainable params: 4,928,350
Non-trainable params: 5,888
_________________________________________________________________
2019-10-07 06:25:39,286 [INFO] iva.detectnet_v2.scripts.inference: Initialized model
2019-10-07 06:25:39,302 [INFO] iva.detectnet_v2.scripts.inference: Commencing inference
0it [00:05, ?it/s]
Traceback (most recent call last):
  File "/usr/local/bin/tlt-infer", line 10, in <module>
    sys.exit(main())
  File "./common/magnet_infer.py", line 35, in main
  File "./detectnet_v2/scripts/inference.py", line 222, in main
  File "./detectnet_v2/scripts/inference.py", line 185, in inference_wrapper_batch
  File "./detectnet_v2/postprocessor/bbox_handler.py", line 73, in bbox_preprocessing
  File "./detectnet_v2/postprocessor/bbox_handler.py", line 99, in abs_bbox_converter
  File "/usr/local/lib/python2.7/dist-packages/addict/addict.py", line 64, in __getitem__
    if name not in self:
TypeError: unhashable type

Please find the command that I am running below:

# Running inference for detection on n images
!tlt-infer detectnet_v2 -i $USER_EXPERIMENT_DIR/data/testing/image_2 \
                        -o $USER_EXPERIMENT_DIR/tlt_infer_testing \
                        -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/model.step-858650.tlt \
                        -cp $SPECS_DIR/detectnet_v2_clusterfile_kitti.json \
                        -k $KEY \
                        --kitti_dump \
                        -lw 3 \
                        -g 0 \
                        -bs 64

Please help me out.

Thanks.

Hi neophyte1,
Could you please paste your $SPECS_DIR/detectnet_v2_clusterfile_kitti.json?
Also, can you run tlt-infer successfully against the final output model instead of an intermediate model?

Hi neophyte1,
Please double check the “target_classes” list in your $SPECS_DIR/detectnet_v2_clusterfile_kitti.json.
I can reproduce your issue if I delete one of the existing classes or add a surplus class to that list.

See Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation:
target_classes: The list of classes the network has been trained for. The order of the list must be the same as that used during training.
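As a quick sanity check, you could run something like the sketch below next to your spec file. This is only an illustration, not part of TLT: the filename, the class list, and the assumption that target_classes sits at the top level of the clusterfile JSON are all placeholders that you need to adjust to your own setup.

import json

# Placeholder: the classes used during training, in the same order as in the training spec.
trained_classes = ["car", "cyclist", "pedestrian"]

# Placeholder path; point this at your actual clusterfile.
with open("detectnet_v2_clusterfile_kitti.json") as f:
    cluster_cfg = json.load(f)

# Assumes target_classes is a top-level key in the JSON; adjust the lookup
# if your clusterfile nests it under another section.
target_classes = cluster_cfg.get("target_classes")

if target_classes != trained_classes:
    print("Mismatch between clusterfile and training classes:")
    print("  clusterfile :", target_classes)
    print("  training    :", trained_classes)
else:
    print("target_classes matches the training classes.")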