Difference in accuracy between TLT 3 and DeepStream SGIE (DetectNet_v2)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) DGPU / Jetson NX
• DeepStream Version 5.x
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

We have built a DeepStream application that uses the TLT 3 DetectNet_v2 object detection example to detect one class (let's say car). The accuracy of the trained model is quite good. To further specify the car make, we trained another model, an image classifier, again following the example notebook in TLT. This model works as a secondary classifier (SGIE).

The image classifier is trained on 5k+ images, and the inference results in TLT are really good (mAP > 92%).
Now, when we try the image classifier in the DeepStream app, the accuracy is very, very poor.

Following other forum posts, I've been fiddling with the "offsets" values in the SGIE config file, which changes the inference accuracy, but there's no documentation about the offsets values, so I can't tell which values to tweak.
While looking through similar posts, I even saw that the different forum moderators recommend different offsets. Which values should I use?
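
For reference, my understanding from the Gst-nvinfer documentation is that the plugin scales each detected object crop to the network input resolution and then normalizes every pixel as y = net-scale-factor * (x - offset), with one offset per channel in the order given by model-color-format. A minimal NumPy sketch of that math (the crop size and all values below are just placeholders, not a recommendation):

import numpy as np

# Hypothetical 224x224 RGB object crop as handed to the SGIE (pixel values 0-255).
crop = np.random.randint(0, 256, (224, 224, 3)).astype(np.float32)

# Values taken from my SGIE config: one offset per channel, in the channel
# order implied by model-color-format.
net_scale_factor = 1.0
offsets = np.array([124.0, 117.0, 104.0], dtype=np.float32)

# Per-pixel normalization applied by Gst-nvinfer before the tensor reaches TensorRT:
#   y = net-scale-factor * (x - offset)
net_input = net_scale_factor * (crop - offsets)

If that is correct, the offsets should be whatever per-channel means were subtracted during training, and a mismatch (or a swapped RGB/BGR channel order) would feed the engine tensors it never saw during training.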

The TLT training spec is the default classifier spec.

Here's the config file for the SGIE:

# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB), model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode(Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
# gie-unique-id is the main ID which helps in accessing objects of the config
# file which is currently processed.
gie-unique-id=2
# Scaling images
net-scale-factor=1
# Path to the TLT model engine
model-engine-file=models/secondary/final_model.trt
# Path to label file
labelfile-path=models/secondary/labels.txt
# Input dims of the image
input-object-min-width=50
#input-object-min-height=2
# model-color-format (Default=0 i.e. RGB)
model-color-format=0
offsets=124;117;104
force-implicit-batch-dim=1
# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=1
# 0=Detector, 1=Classifier, 2=Segmentation, 100=Other
network-type=1
num-detected-classes=3
# Class IDs of the parent GIE on which this GIE must operate.
# The parent GIE is specified using operate-on-gie-id.
operate-on-class-ids=0
operate-on-gie-id=1
# Used when running a classifier after detection
classifier-async-mode=1
classifier-threshold=0.5
#scaling-filter=0
#scaling-compute-hw=0

Sorry for the late response. Is this still an issue to support?

Thanks

Yes, it's still open

Hi @KGerry,
Sorry for the long delay!

The offsets and scale values you should set in the DS GIE config depend on the offsets and scale you used in TLT training.
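
For example, here is a rough sketch (assuming your classification spec uses the Keras-style preprocess_mode options "caffe", "torch" or "tf") of how each mode would translate into nvinfer settings; the dictionary below is only illustrative, so please verify against your own training spec:

# Approximate mapping from the TLT classification preprocess_mode to Gst-nvinfer
# settings, based on y = net-scale-factor * (x - offset). Illustrative values only.
preprocess_to_nvinfer = {
    # "caffe": BGR channel order, subtract ImageNet means, no scaling.
    "caffe": {
        "model-color-format": 1,               # BGR
        "net-scale-factor": 1.0,
        "offsets": "103.939;116.779;123.68",
    },
    # "tf": RGB channel order, scale pixels to the range [-1, 1].
    "tf": {
        "model-color-format": 0,               # RGB
        "net-scale-factor": 1.0 / 127.5,
        "offsets": "127.5;127.5;127.5",
    },
    # "torch": RGB, scale to 0-1, then normalize with the ImageNet mean/std.
    # Only the mean subtraction is captured here; the per-channel std division
    # has no exact equivalent with a single net-scale-factor.
    "torch": {
        "model-color-format": 0,               # RGB
        "net-scale-factor": 1.0 / 255.0,
        "offsets": "123.675;116.28;103.53",    # 255 * [0.485, 0.456, 0.406]
    },
}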

Thanks!

There are no offsets specified in the default TLT DetectNet spec file, nor have I ever seen anyone setting offsets in the spec file. If I don't use the default offsets values as provided in the DetectNet DS GIE example, the classification seems even worse. In TLT my mAP is 95 and inference on test images is spot on. In DS it's just rubbish.

Moving this topic from Deepstream forum into TAO forum.

@KGerry
Please modify your settings as below.
offsets=103.939;116.779;123.68
model-color-format=1

Refer to Issue with image classification tutorial and testing with deepstream-app - #21 by Morganh
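
As a rough sanity check (this assumes tf.keras is available and that the classifier was trained with the "caffe" preprocess_mode, i.e. Keras imagenet_utils preprocessing), the settings above reproduce the training-time preprocessing:

import numpy as np
from tensorflow.keras.applications.imagenet_utils import preprocess_input

# Hypothetical RGB crop with pixel values in 0-255.
rgb = np.random.randint(0, 256, (224, 224, 3)).astype(np.float32)

# Training-time preprocessing: "caffe" mode converts RGB to BGR and subtracts
# the ImageNet channel means.
expected = preprocess_input(rgb.copy(), mode="caffe")

# What nvinfer produces with model-color-format=1 (BGR), net-scale-factor=1
# and offsets=103.939;116.779;123.68.
bgr = rgb[..., ::-1]
offsets = np.array([103.939, 116.779, 123.68], dtype=np.float32)
ds_input = 1.0 * (bgr - offsets)

assert np.allclose(expected, ds_input, atol=1e-3)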
