BatchedNMS not found for RetinaNet (and minor doc mistake)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
RTX2080Ti
• DeepStream Version
5.0

To integrate RetinaNet into DeepStream, the example config at https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#intg_retinanet_model is as follows:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=
tlt-encoded-model=
tlt-model-key=
uff-input-dims=3;384;1248;0
uff-input-blob-name=Input
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=3
interval=0
gie-unique-id=1
is-classifier=0
#network-type=0
output-blob-names=BatchedNMS
parse-bbox-func-name=NvDsInferParseCustomYOLOV3Uff
custom-lib-path=
[class-attrs-all]
threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

I think it should be something like:

output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSDTLT

Otherwise, I get

ERROR: …/nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: UffParser: Output error: Output BatchedNMS not found
parseModel: Failed to parse UFF model
ERROR: tlt/tlt_decode.cpp:274 failed to build network since parsing model errors.
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:797 Failed to create network using custom network creation function
ERROR: …/nvdsinfer/nvdsinfer_model_builder.cpp:862 Failed to get cuda engine from custom library API
0:00:02.231833457 9829 0x25e6d50 ERROR nvinfer gstnvinfer.cpp:596:gst_nvinfer_logger: NvDsInferContext[UID 3]: Error in NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1611> [UID = 3]: build engine file failed
Segmentation fault (core dumped)

Yes, there are issues in the doc. It should be:

For SSD, DSSD, RetinaNet: NMS
For YOLOv3: BatchedNMS
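
Putting this together, the two lines in the example config that need to change for RetinaNet would read (a sketch based on the confirmation above; all other properties, including the blank model paths and key, stay as in the original example):

```ini
# Corrected values for a RetinaNet (or SSD/DSSD) TLT model:
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSDTLT

# The values from the doc's example apply to YOLOv3 only:
# output-blob-names=BatchedNMS
# parse-bbox-func-name=NvDsInferParseCustomYOLOV3Uff
```

Note that custom-lib-path must still point to the library that provides the chosen parse function.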

Thanks for the confirmation. Marked as a solution. I'd appreciate a doc fix when you guys get a chance.