FasterRCNN inference

If the inference config looks like this:

inference_config {
  images_dir: '/workspace/tao-experiments/data/testing/image_2'
  model: '/workspace/tao-experiments/faster_rcnn/model-epoch-XX.hdf5'
  batch_size: 1
  detection_image_output_dir: '/workspace/tao-experiments/faster_rcnn/inference_results_imgs'
  labels_dump_dir: '/workspace/tao-experiments/faster_rcnn/inference_dump_labels'
  rpn_pre_nms_top_N: 6000
  rpn_nms_max_boxes: 300
  rpn_nms_overlap_threshold: 0.7
  object_confidence_thres: 0.0001
  bbox_visualize_threshold: 0.6
  classifier_nms_max_boxes: 100
  classifier_nms_overlap_threshold: 0.3
}

and the inference CLI looks like this:

!tao model faster_rcnn inference --gpu_index $GPU_INDEX \
                -e $SPECS_DIR/default_spec_resnet50-1Class.txt \
                -m /workspace/tao-experiments/faster_rcnn/model-epoch-YY.hdf5

Which model is used for inference? Is it model-epoch-XX or model-epoch-YY?

Refer to https://docs.nvidia.com/tao/tao-toolkit/text/object_detection/fasterrcnn.html#running-inference-on-the-model

The model path (if provided on the command line, as in your example with -m) will override inference_config.model in the spec file.
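So in your case, model-epoch-YY.hdf5 (the path passed with -m) is the one used for inference. A minimal sketch of both cases, reusing the paths from your question (the second invocation, without -m, is my reading of the docs rather than something I have tested):

# -m is given, so it overrides the spec: model-epoch-YY.hdf5 is used
!tao model faster_rcnn inference --gpu_index $GPU_INDEX \
                -e $SPECS_DIR/default_spec_resnet50-1Class.txt \
                -m /workspace/tao-experiments/faster_rcnn/model-epoch-YY.hdf5

# Without -m, the tool should fall back to inference_config.model
# in the spec file, i.e. model-epoch-XX.hdf5
!tao model faster_rcnn inference --gpu_index $GPU_INDEX \
                -e $SPECS_DIR/default_spec_resnet50-1Class.txt

This matches the usual TAO convention that command-line arguments take precedence over the corresponding spec-file fields.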

Thanks. Appreciate your input.
Best
