Error deploying custom SSD model to deepstream

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) 4.5.1
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Question

I’m trying to deploy an SSD model trained with TLT v3 in a DeepStream pipeline. After training, I ran the TLT inference tool on test images and the output is correct and as expected. But when the model is deployed in DeepStream, the output is very different and incorrect, and only one class is being detected.

Model Training Specs:

random_seed: 42
ssd_config {
  aspect_ratios_global: "[1.0, 2.0, 0.5, 3.0, 1.0/3.0]"
  scales: "[0.05, 0.1, 0.25, 0.4, 0.55, 0.7, 0.85]"
  two_boxes_for_ar1: true
  clip_boxes: false
  variances: "[0.1, 0.1, 0.2, 0.2]"
  arch: "resnet"
  nlayers: 18
  freeze_bn: false
  freeze_blocks: 0
}
training_config {
  batch_size_per_gpu: 16
  num_epochs: 160
  enable_qat: false
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 5e-5
      max_learning_rate: 2e-2
      soft_start: 0.15
      annealing: 0.8
    }
  }
  regularizer {
    type: L1
    weight: 3e-5
  }
}
eval_config {
  validation_period_during_training: 5
  average_precision_mode: SAMPLE
  batch_size: 16
  matching_iou_threshold: 0.5
}
nms_config {
  confidence_threshold: 0.01
  clustering_iou_threshold: 0.6
  top_k: 200
}
augmentation_config {
  output_width: 960
  output_height: 544
  output_channel: 3
}
dataset_config {
  data_sources: {
     label_directory_path: "dataset_masks_total/labels"
     image_directory_path: "dataset_masks_total/images"
  }
  validation_data_sources: {
    label_directory_path: "demo_mask_dataset/test1/labels"
    image_directory_path: "demo_mask_dataset/test1/test"
  }
  target_class_mapping {
    key: "mask"
    value: "mask"
  }
  target_class_mapping {
     key: "no-mask"
     value: "no-mask"
  }       
}

DeepStream Model Configuration File:

[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=…/…/app/resources/models/maskdetection/labels_masknet.txt
#int8-calib-file=…/…/app/resources/models/maskdetection/model_mask1/cal.bin
tlt-encoded-model=…/…/app/resources/models/maskdetection/model_mask1/ssd_resnet18_epoch_100.etlt
tlt-model-key=tlt_encode
infer-dims=3;544;960
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
##0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=2
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSDTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_infercustomparser.so

[class-attrs-all]
threshold=0.2
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

I also wanted to use INT8, but with that network mode nothing is detected at all.
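For reference, what I tried for INT8 was essentially just switching the network mode and pointing at the calibration cache from the commented-out line above, roughly:

# INT8 variant of the relevant [property] lines (sketch of what I tried)
int8-calib-file=…/…/app/resources/models/maskdetection/model_mask1/cal.bin
##0=FP32, 1=INT8, 2=FP16 mode
network-mode=1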

Hey, can you confirm whether this post-processing parser can parse your model’s output?

Hi, I tried several parsers. First I tried the one mentioned in https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/configs/ssd_tlt/pgie_ssd_tlt_config.txt. This parser is the one the TLT documentation says to use for custom SSD models, but when I use it this error appears:

app-be_1 | python3.6: nvdsinfer_custombboxparser_tlt.cpp:81: bool NvDsInferParseCustomNMSTLT(const std::vector&, const NvDsInferNetworkInfo&, const NvDsInferParseDetectionParams&, std::vector&): Assertion `(int) det[1] < out_class_size’ failed.
app-be_1 | Aborted (core dumped)

When I tried the parser at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_infercustomparser.so instead, that error no longer appeared.

What is the number of classes in your model’s output? You can add some logging inside the custom parser to check the output.

I have two classes: mask and no-mask.
I added logging inside the custom parser to check the values of det[1] and out_class_size. Both have the value 2. In the labels file I have:

mask
no-mask

I also added this log:

std::cout << "id/label/conf/ x/y x/y – "
<< det[0] << " " << det[1] << " " << det[2] << " "
<< det[3] << " " << det[4] << " " << det[5] << " " << det[6] << std::endl;

and obtained this result:
id/label/conf/ x/y x/y – 0 2 0.529419 0 0.0169547 0.682143 0.939139

With logging in the custom parser I figured out that the class ids were 1 and 2, instead of the 0 and 1 expected by the custom parser from deepstream_tlt_apps/pgie_ssd_tlt_config.txt at master · NVIDIA-AI-IOT/deepstream_tlt_apps · GitHub. I altered the parser code to expect ids 1 and 2, and the error mentioned above no longer appears.
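For reference, this is roughly the kind of change I made inside the per-detection loop of the parser (a sketch, not the full file; it assumes the usual 7-float-per-detection layout shown in the log above, and CLIP is the clamp helper already defined in the sample parser):

// Sketch: shift the 1-based class ids my model emits back to 0-based,
// so they line up with the two-entry label file and num-detected-classes=2.
int classId = static_cast<int>(det[1]) - 1;    // 1 -> mask (0), 2 -> no-mask (1)
if (classId < 0 || classId >= out_class_size)  // skip background / unexpected ids
    continue;

NvDsInferObjectDetectionInfo object;
object.classId = classId;
object.detectionConfidence = det[2];
// bbox coordinates are normalized; scale them to the network input size
object.left   = CLIP(det[3] * networkInfo.width,  0, networkInfo.width  - 1);
object.top    = CLIP(det[4] * networkInfo.height, 0, networkInfo.height - 1);
object.width  = CLIP((det[5] - det[3]) * networkInfo.width,  0, networkInfo.width  - 1);
object.height = CLIP((det[6] - det[4]) * networkInfo.height, 0, networkInfo.height - 1);
objectList.push_back(object);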

Also, I increased threshold and pre-cluster-threshold to eliminate some low-confidence bounding boxes.
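For example, the [class-attrs-all] group now looks roughly like this (the exact thresholds are just illustrative, tune them for your own model):

[class-attrs-all]
# raised from the original 0.2 / 0.3 to drop low-confidence boxes (example values)
threshold=0.5
pre-cluster-threshold=0.5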

Finally, I managed to get INT8 working by using more representative images for calibration when exporting the model in TLT.

Would it be better to change num-detected-classes to 3?

Can you please share your custom parser? I am facing the exact same issue with SSD. I am sure that I have 15 classes, but I keep getting: Assertion `(int) det[1] < out_class_size’ failed.

I tried this; it passes the assertion, but it produces wrong labels plus one empty label (a box without text).

nvdsinfer_custombboxparser_tlt.cpp (7.1 KB)
Makefile (2.0 KB)
This is the code I used for the custom parser. Use the Makefile to compile it before using it in the pipeline.

This works, thanks.
I also had to change parse-bbox-func-name from NvDsInferParseCustomSSDTLT to NvDsInferParseCustomNMSTLT.
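For anyone else landing here, the relevant [property] lines end up looking roughly like this (the custom-lib-path is a placeholder for wherever the attached Makefile puts the built library):

# use the NMS parse function from the rebuilt TLT parser library
parse-bbox-func-name=NvDsInferParseCustomNMSTLT
custom-lib-path=/path/to/libnvds_infercustomparser_tlt.so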