Preprocessing parameters: These are the same for all classification models generated by TLT?

• DeepStream Version: 4.0.2

The documentation for DeepStream and TLT seems vague in many cases, but for my current project I need to understand exactly what the following content means:

# preprocessing parameters: These are the same for all classification models generated by TLT.
net-scale-factor=1.0
offsets=123.67;116.28;103.53
model-color-format=1
batch-size=30

Is the claim “These are the same for all classification models generated by TLT” valid for all of these options? At least for batch-size it does not seem to make sense to me.

And why are we supposed to use those exact numbers for net-scale-factor and offsets? net-scale-factor=1.0 might make sense, since we do not want to change the pixel data in general, but why do we need to adjust every channel?
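For reference, my understanding of the preprocessing described in the plugin manual is that nvinfer computes y = net-scale-factor * (x - offset) for every pixel, with one offset per channel. A minimal NumPy sketch of what I assume happens (the HxWx3 frame layout and the channel order are my assumptions):

import numpy as np

# Offsets as given in the config; their order must match model-color-format.
OFFSETS = np.array([123.67, 116.28, 103.53], dtype=np.float32)
NET_SCALE_FACTOR = 1.0

def preprocess(frame):
    # frame: HxWx3 uint8 image -> float32 array fed to the network.
    # Subtract the per-channel offset, then scale: y = scale * (x - offset).
    return NET_SCALE_FACTOR * (frame.astype(np.float32) - OFFSETS)

With net-scale-factor=1.0 the values keep their [0, 255] range; the only change is that each channel is shifted by its own offset.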

You can change the parameters according to your model. Please refer to https://docs.nvidia.com/metropolis/deepstream/plugin-manual/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_details.3.01.html#wwpID0E0OFB0HA for details on all the config items.

Thanks for your reply. Unfortunately, that does not answer my question. We are currently working on an implementation with DeepStream and TLT, but we are not able to understand why the results from tlt-infer differ from those in DeepStream. The accuracy loss for some models is around 20%. We have tried (hopefully) all possible settings, and we think that the normalization might be the reason for the discrepancy.

We were not able to find any hint about normalization during training in the TLT source code. Where do the numbers 123.67;116.28;103.53 come from?
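Our current guess (an assumption on our side, not something we found confirmed in the TLT docs) is that they are the standard ImageNet per-channel means, scaled from the [0, 1] range to [0, 255]:

# Commonly cited ImageNet channel means used by pretrained backbones.
for mean in (0.485, 0.456, 0.406):
    print(f"{mean * 255:.2f}")
# Prints values matching 123.67;116.28;103.53 up to rounding
# (0.485 * 255 = 123.675).

If that is the case, subtracting them would only be correct when the training pipeline applied the same per-channel mean subtraction.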

Hey customer,
Please create a topic in the TLT forum; you will get a better answer there.
BTW, it would be better to use DS 5.0.