TLT learning, validation and training details

I have two quick questions.
The first is about the evaluation of TLT models. I couldn't find any way to save the outputs of the model using tlt detectnet_v2 evaluate. For the LPD network, for example, I know there is a -r flag for saving a JSON file containing the mean average precision, average precision, and related values. However, I was wondering if there is any flag for saving the bounding boxes or the cropped license plates?

The second is about the training dataset. Is there any information available about the training data used for the pre-trained models, such as the location of the images, the lighting and weather conditions, and the number of images?
Thanks in advance.

Please use tlt detectnet_v2 inference. The annotated images are in inference_output/images_annotated and the labels are in inference_output/labels.
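For example, a minimal invocation could look like this (the paths and key are placeholders; the flags match the detectnet_v2 inference usage shown by -h):

$ tlt detectnet_v2 inference -e <inference_spec.txt> \
                             -i <input_image_dir> \
                             -o <output_dir> \
                             -k <encryption_key>

# <output_dir>/images_annotated/ : input images with the detected boxes drawn on them
# <output_dir>/labels/           : per-image KITTI-style label files with the box coordinates

There is no built-in flag to save cropped license plates as far as I know; you can crop them yourself from the boxes in the dumped label files.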

Please check this info in each model card. For example, the PeopleNet — Transfer Learning Toolkit 3.0 documentation says: “The datasheet for the model is captured in its model card hosted at NGC.”

Thank you for the reply. I could run inference for LPRNet, but I have an issue with LPDNet. I fine-tuned LPDNet as below:

$ tlt detectnet_v2 train -e /workspace/openalpr/SPECS_train.txt -r /workspace/openalpr/openlpd -k nvidia_tlt

This generated the model at /workspace/openalpr/openlpd/weights/model.tlt. I then used the command below for inference, but I get an error. Any idea?

$ tlt detectnet_v2  inference -i /workspace/openalpr/lpd/data/image/ -e /workspace/openalpr/SPECS_train.txt -m /workspace/openalpr/openlpd/weights/model.tlt -o /workspace/openalpr/lpd/data/infer/ -k nvidia_tlt  

2021-07-08 14:45:29,104 [INFO] root: Registry: ['nvcr.io']
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-v7ylk7oo because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
Using TensorFlow backend.
usage: detectnet_v2 inference [-h] [--num_processes NUM_PROCESSES]
                              [--gpus GPUS]
                              [--gpu_index GPU_INDEX [GPU_INDEX ...]]
                              [--use_amp] [--log_file LOG_FILE] -e
                              INFERENCE_SPEC -i INFERENCE_INPUT -k KEY -o
                              INFERENCE_OUTPUT [-v]
                              {calibration_tensorfile,dataset_convert,evaluate,export,inference,prune,train}
                              ...
detectnet_v2 inference: error: invalid choice: '/workspace/openalpr/openlpd/weights/model.tlt' (choose from 'calibration_tensorfile', 'dataset_convert', 'evaluate', 'export', 'inference', 'prune', 'train')
2021-07-08 14:45:38,859 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

You need to set “-e” to an inference spec file instead of the training spec file. Also note that detectnet_v2 inference has no “-m” flag (see the usage message above), so the model path after it is parsed as a subcommand, which is what produces the “invalid choice” error; the model path belongs inside the inference spec.
Reference: DetectNet_v2 — Transfer Learning Toolkit 3.0 documentation
Or you can download the Jupyter notebook to find the related spec file: TLT Quick Start Guide — Transfer Learning Toolkit 3.0 documentation
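For illustration, a corrected command could look like this (SPECS_infer.txt is a hypothetical file name; since inference has no “-m” flag, the model path moves into the spec):

$ tlt detectnet_v2 inference -e /workspace/openalpr/SPECS_infer.txt \
                             -i /workspace/openalpr/lpd/data/image/ \
                             -o /workspace/openalpr/lpd/data/infer/ \
                             -k nvidia_tlt

And a minimal sketch of what SPECS_infer.txt could contain, following the DetectNet_v2 inference spec format in the documentation above (the class name, image size, and clustering values here are placeholders to adapt to your model):

inferencer_config {
  # Class(es) the trained detector predicts; "lpd" is assumed here.
  target_classes: "lpd"
  # Inference resolution; must match what the model was trained for.
  image_width: 640
  image_height: 480
  image_channels: 3
  batch_size: 1
  gpu_index: 0
  # Path to the fine-tuned .tlt model; this replaces the -m flag.
  tlt_config {
    model: "/workspace/openalpr/openlpd/weights/model.tlt"
  }
}
bbox_handler_config {
  # Dump KITTI-format label files alongside the annotated images.
  kitti_dump: true
  disable_overlay: false
  overlay_linewidth: 2
  classwise_bbox_handler_config {
    key: "lpd"
    value {
      confidence_model: "aggregate_cov"
      output_map: "lpd"
      bbox_color { R: 0 G: 255 B: 0 }
      # Example clustering thresholds; tune for your data.
      clustering_config {
        coverage_threshold: 0.005
        dbscan_eps: 0.3
        dbscan_min_samples: 0.05
        minimum_bounding_box_height: 4
      }
    }
  }
}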
