Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version 5.1
• JetPack Version (valid for Jetson only) 4.5.1
• TensorRT Version 7.1.3
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Question
I’m trying to deploy an SSD model trained with TLT v3 in a DeepStream pipeline. After training, I ran the TLT inference tool on test images and the output is correct and as expected. But when the model is deployed in DeepStream, the output is very different and incorrect, and only one class is ever detected.
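For reference, this is roughly how I ran the TLT check (a sketch of the TLT 3.0 launcher call wrapped in subprocess; the spec and .tlt filenames here are placeholders, not my exact ones):

import subprocess

# Sketch of the TLT 3.0 SSD inference check; filenames are placeholders.
subprocess.run([
    "tlt", "ssd", "inference",
    "-e", "ssd_train_spec.txt",            # training spec shown below
    "-m", "ssd_resnet18_epoch_100.tlt",    # trained checkpoint (placeholder name)
    "-i", "demo_mask_dataset/test1/test",  # test images
    "-o", "inference_out",                 # annotated images are written here
    "-k", "tlt_encode",                    # same key as tlt-model-key below
], check=True)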
Model Training Specs:
random_seed: 42
ssd_config {
  aspect_ratios_global: "[1.0, 2.0, 0.5, 3.0, 1.0/3.0]"
  scales: "[0.05, 0.1, 0.25, 0.4, 0.55, 0.7, 0.85]"
  two_boxes_for_ar1: true
  clip_boxes: false
  variances: "[0.1, 0.1, 0.2, 0.2]"
  arch: "resnet"
  nlayers: 18
  freeze_bn: false
  freeze_blocks: 0
}
training_config {
  batch_size_per_gpu: 16
  num_epochs: 160
  enable_qat: false
  learning_rate {
    soft_start_annealing_schedule {
      min_learning_rate: 5e-5
      max_learning_rate: 2e-2
      soft_start: 0.15
      annealing: 0.8
    }
  }
  regularizer {
    type: L1
    weight: 3e-5
  }
}
eval_config {
  validation_period_during_training: 5
  average_precision_mode: SAMPLE
  batch_size: 16
  matching_iou_threshold: 0.5
}
nms_config {
  confidence_threshold: 0.01
  clustering_iou_threshold: 0.6
  top_k: 200
}
augmentation_config {
  output_width: 960
  output_height: 544
  output_channel: 3
}
dataset_config {
  data_sources: {
    label_directory_path: "dataset_masks_total/labels"
    image_directory_path: "dataset_masks_total/images"
  }
  validation_data_sources: {
    label_directory_path: "demo_mask_dataset/test1/labels"
    image_directory_path: "demo_mask_dataset/test1/test"
  }
  target_class_mapping {
    key: "mask"
    value: "mask"
  }
  target_class_mapping {
    key: "no-mask"
    value: "no-mask"
  }
}
DeepStream Model Configuration File:
[property]
gpu-id=0
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
labelfile-path=…/…/app/resources/models/maskdetection/labels_masknet.txt
#int8-calib-file=…/…/app/resources/models/maskdetection/model_mask1/cal.bin
tlt-encoded-model=…/…/app/resources/models/maskdetection/model_mask1/ssd_resnet18_epoch_100.etlt
tlt-model-key=tlt_encode
infer-dims=3;544;960
uff-input-order=0
uff-input-blob-name=Input
batch-size=1
##0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=2
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=NMS
parse-bbox-func-name=NvDsInferParseCustomSSDTLT
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_infercustomparser.so

[class-attrs-all]
threshold=0.2
pre-cluster-threshold=0.3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
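For comparison with TLT, this is my understanding of the preprocessing the [property] group above asks nvinfer to apply: y = net-scale-factor * (x - mean) per channel, in BGR order since model-color-format=1. The numpy/cv2 sketch below is only illustrative, not nvinfer's actual code:

import numpy as np
import cv2  # only used here to resize a test frame

def deepstream_like_preprocess(bgr_frame):
    # offsets=103.939;116.779;123.68 are the per-channel means (B, G, R)
    offsets = np.array([103.939, 116.779, 123.68], dtype=np.float32)
    x = cv2.resize(bgr_frame, (960, 544)).astype(np.float32)  # infer-dims=3;544;960
    y = 1.0 * (x - offsets)       # net-scale-factor=1.0, mean subtraction
    return y.transpose(2, 0, 1)   # HWC -> CHW, matching uff-input-order=0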
I also wanted to use INT8, but with that network mode nothing is detected at all.
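My assumption is that INT8 additionally needs the calibration cache generated at export time and pointed to by int8-calib-file (that line is still commented out in my config above). A sketch of the export call as I understand it from the TLT 3.0 docs; paths and batch counts are placeholders:

import subprocess

# Sketch (my assumption, per the TLT 3.0 docs) of exporting with an INT8
# calibration cache; filenames and batch counts are placeholders.
subprocess.run([
    "tlt", "ssd", "export",
    "-m", "ssd_resnet18_epoch_100.tlt",
    "-k", "tlt_encode",
    "-e", "ssd_train_spec.txt",
    "-o", "ssd_resnet18_epoch_100.etlt",
    "--data_type", "int8",
    "--cal_image_dir", "dataset_masks_total/images",  # representative training images
    "--batch_size", "16",
    "--batches", "10",
    "--cal_cache_file", "cal.bin",  # what int8-calib-file should point to
], check=True)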