Inference RPN post NMS assertion error

Hello,

I am running into a strange error (for which there is no corresponding entry in the spec file).

I am running FasterRCNN training with the attached spec file, but I get this error:

    assert 0 < self.infer_rpn_post_nms_top_N < self.infer_rpn_pre_nms_top_N, '''
AssertionError: 
        Inference RPN post NMS should be positive and less than Inference RPN pre NMS top N
         got 1200(pre) and 0(post)

The documentation does not mention any rpn_pre_nms_top_N in the inference config!
Removing the inference config does not help either, because both values (pre and post) are then read as 0, which also triggers the assertion.

Could anyone help with this?

Thanks
specs.txt (4.3 KB)

As a workaround, please set rpn_post_nms_top_N in the inference_config:

    rpn_post_nms_top_N: xxx

And make sure xxx is lower than rpn_pre_nms_top_N.

We will improve the documentation and the spec file as well.

I have already set rpn_post_nms_top_N to xxx, but the training docker does not recognize this argument!

Please ignore my previous comment. Please set rpn_nms_max_boxes in the inference_config.

Your spec file is missing this parameter. Also, please make sure it is lower than rpn_pre_nms_top_N.
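
For reference, the relevant part of the inference_config would look like this. This is a minimal sketch: the 1200 is taken from the assertion message above, the 300 is an illustrative value, and the remaining inference settings are left as they are in your spec file, so adjust both numbers to your setup:

    inference_config {
      # ... other inference settings (images_dir, model, batch_size, etc.) unchanged ...
      rpn_pre_nms_top_N: 1200    # value reported as "pre" in the assertion
      rpn_nms_max_boxes: 300     # must be > 0 and < rpn_pre_nms_top_N
    }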

Refer to FasterRCNN - NVIDIA Docs or https://github.com/NVIDIA/tao_tutorials/blob/main/notebooks/tao_launcher_starter_kit/faster_rcnn/specs/default_spec_resnet18.txt

