Peoplenet Inference

Hi all,
Can you please suggest how to run inference using the unpruned resnet34_peoplenet.tlt file, without training, to see how it works on the KITTI dataset? Thanks in advance.

If you do not want to train, I suggest you use the pruned resnet34_peoplenet.tlt to run inference.
See https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#intg_purpose_built_models

Or you can run inference with the "tlt-infer" tool against the pruned peoplenet model.

  1. First, modify the spec "detectnet_v2_inference_kitti_tlt.txt". The peoplenet model is trained with 3 classes: Person, Bag, Face, so you need to change "target_classes" and "classwise_bbox_handler_config" in the spec accordingly. Also change the width/height, because the peoplenet model is trained on a 960x544 dataset.
  2. Prepare some data containing Person, Bag, or Face. Make sure you resize the images to 960x544.
  3. Set $KEY to tlt_encode.
  4. Run inference.
 # Running inference for detection on n images

!tlt-infer detectnet_v2 -e $SPECS_DIR/detectnet_v2_inference_kitti_tlt.txt \
                        -o $USER_EXPERIMENT_DIR/tlt_infer_testing \
                        -i $DATA_DOWNLOAD_DIR/testing/image_2 \
                        -k $KEY
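As a sketch of the spec edits in step 1, the relevant fields in "detectnet_v2_inference_kitti_tlt.txt" look roughly like the fragment below. The field names follow the detectnet_v2 inference spec; the class names and thresholds here are illustrative assumptions, so check them against your downloaded model card.

```
inferencer_config {
  # peoplenet classes (assumed lowercase; verify against the model card)
  target_classes: "person"
  target_classes: "bag"
  target_classes: "face"
  # peoplenet is trained on 960x544 input
  image_width: 960
  image_height: 544
  image_channels: 3
  ...
}
bbox_handler_config {
  # one entry per class; values shown are placeholders
  classwise_bbox_handler_config {
    key: "person"
    ...
  }
  ...
}
```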

Thanks @Morganh.

Hi @Morganh
Thanks for the reply. I am able to run inference with the unpruned model, but when I follow your suggestion to run inference with the pruned model, it fails: the pruned model has the extension .etlt, and tlt-infer reports "Invalid model file extension".
So, is there any way to run inference with the pruned model?
And what is the difference between .tlt and .etlt?

So, please try to run tlt-infer with the unpruned model.
An .etlt model is an encrypted TLT file. During model export, the .tlt model is encrypted with a private key, and this key is required when you deploy the model for inference.

Also, please ignore my previous comment. You do not need to resize.

I am now able to run inference with the unpruned model.

As you say, the TLT model is encrypted with a private key during export. Does that mean we cannot use resnet34_peoplenet_pruned.etlt, or is there a way to decrypt this file?

Hi,
Please download the unpruned peoplenet version. It is a .tlt file.
See https://ngc.nvidia.com/catalog/models/nvidia:tlt_peoplenet/files?version=unpruned_v1.0

Hi @Morganh
But while training with my custom dataset, do I need to resize all the images and the corresponding annotations to the same image size?

Yes.
See https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#arch_specific_models
The tlt-train tool does not support training on images of multiple resolutions, or resizing images during training. All of the images must be resized offline to the final training size and the corresponding bounding boxes must be scaled accordingly.
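As a sketch of what "scaled accordingly" means for KITTI labels: the four bbox fields (columns 5-8: left, top, right, bottom) must be multiplied by the same width/height factors used to resize the image. The helper below is hypothetical (TLT does not ship it); it only illustrates the arithmetic for one label line.

```python
# Sketch: scale the bbox fields of a KITTI label line to match an
# offline image resize. Hypothetical helper, not part of TLT.

def scale_kitti_label(line, orig_w, orig_h, new_w, new_h):
    """Rescale bbox columns 5-8 (left, top, right, bottom) of a single
    KITTI label line for an image resized from (orig_w, orig_h)
    to (new_w, new_h)."""
    fields = line.split()
    sx, sy = new_w / orig_w, new_h / orig_h
    left, top, right, bottom = (float(v) for v in fields[4:8])
    fields[4:8] = [f"{left * sx:.2f}", f"{top * sy:.2f}",
                   f"{right * sx:.2f}", f"{bottom * sy:.2f}"]
    return " ".join(fields)

# Example: a 1280x720 image resized to the 960x544 peoplenet resolution.
label = "Person 0.0 0 0.0 100.0 200.0 300.0 400.0 0 0 0 0 0 0 0"
print(scale_kitti_label(label, 1280, 720, 960, 544))
# -> Person 0.0 0 0.0 75.00 151.11 225.00 302.22 0 0 0 0 0 0 0
```

In practice you would loop this over every label file while resizing the matching images with your image tool of choice.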

Thanks @Morganh for the reply.
Actually, I have to train peoplenet with a custom dataset. Can I resize all the dataset images to 960x960, so that both width and height are multiples of 16?

Also, I have to train the peoplenet model with multiple datasets. How do I provide a list of dataset paths in tfrecords_kitti_trainval.txt? Or do I have to combine all the datasets into one directory and then give the path of that directory?

kitti_config {
  root_directory_path: ["/home/ubuntu/dataset/coco_dataset_2014", "/home/ubuntu/dataset/pascal_voc"]
  image_dir_name: ["train2014", "JPEG"]
  label_dir_name: ["Annotations", "Annotations"]
  image_extension: ".jpg"
  partition_mode: "random"
  num_partitions: 2
  val_split: 20
  num_shards: 10
}
image_directory_path: ["/home/ubuntu/dataset/coco_dataset_2014", "/home/ubuntu/dataset/pascal_voc"]

I provided multiple datasets like this, but it throws an error.

1) Yes, you can resize to 960x960, 960x544, 640x480, etc.
2) For your case, tlt-dataset-convert does not support multiple datasets.
To solve this, please generate the tfrecords for each dataset separately, then set them in the training spec as below.

 dataset_config {
   data_sources: {
     tfrecords_path: "<path to coco>"
     image_directory_path: "<path to coco image>"
   }
   data_sources: {
     tfrecords_path: "<path to voc>"
     image_directory_path: "<path to voc image>"
   }
   ...
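For reference, each tfrecords set above would come from a separate run of tlt-dataset-convert, each with its own single-dataset conversion spec. A sketch of one such spec follows; the paths and directory names are reused from the earlier attempt and are illustrative only.

```
# Hypothetical single-dataset conversion spec (e.g. coco_convert.txt);
# run tlt-dataset-convert once per dataset, e.g.:
#   tlt-dataset-convert -d coco_convert.txt -o <tfrecords output path>
kitti_config {
  root_directory_path: "/home/ubuntu/dataset/coco_dataset_2014"
  image_dir_name: "train2014"
  label_dir_name: "Annotations"
  image_extension: ".jpg"
  partition_mode: "random"
  num_partitions: 2
  val_split: 20
  num_shards: 10
}
image_directory_path: "/home/ubuntu/dataset/coco_dataset_2014"
```

Note that the scalar fields here take a single path each, unlike the list syntax in the failing attempt above.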

Thanks @Morganh