Secondary model not working when using nvdspreprocess

Hello, I am using DeepStream 6.2 in its official Docker container, with the Python bindings, on a Tesla T4.
I have a working secondary model, and I intend to modify it so that its preprocessing is done by the gst-nvdspreprocess plugin.
The model classifies vehicles. Since I added the preprocessing plugin, I can no longer find any classifier metadata attached to the vehicle objects, so I suspect the model is not running.
This is the config file of the working secondary model:

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   cluster-mode(Default=Group Rectangles), interval(Primary mode only, Default=0)
#   custom-lib-path,
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, process-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]

# PATHS
model-engine-file=/models/vehicles/vehicles.engine
labelfile-path=/src/src/pipeline/models/vehicles_trt/pipeline/labels.txt
# onnx-file=... can be used to compile if there is no engine

# SETTINGS
gpu-id=0
gie-unique-id=11
is-classifier=1
process-mode=2

# PERFORMANCE
# 0=FP32 | 1=INT8 | 2=FP16 mode
network-mode=0
classifier-async-mode=0
force-implicit-batch-dim=1
# batch-size=16 --> value set in Python code

# FILTER
operate-on-gie-id=1
input-object-min-width=64
input-object-min-height=64
# the following class-ids are remapped, see src/settings/deepstream_secondary_models_mapping.py
operate-on-class-ids=202;302;602;702;207;307;607;707

# PREPROCESSING
model-color-format=0
# net-scale-factor = 1 / ( 255 * STD)  where STD is the PyTorch normalization STD
net-scale-factor=0.01735207357279195
# offsets = MEAN / (net-scale-factor * STD)  where MEAN, STD are the PyTorch normalization MEAN, STD
offsets=123.675;116.28;103.53
maintain-aspect-ratio=1
symmetric-padding=1

# POSTPROCESSING
output-blob-names=vehicle_color;vehicle_type
classifier-threshold=0
output-tensor-meta=0
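
For reference, the net-scale-factor and offsets above follow from the PyTorch normalization mentioned in the comments. Assuming the standard ImageNet mean/std (0.485, 0.456, 0.406 and 0.229, 0.224, 0.225, with the std averaged to a single value), the conversion works out to exactly the values in the config:

# Sanity check for the PREPROCESSING values above. The ImageNet mean/std here
# are an assumption; substitute the mean/std the model was actually trained with.
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

# DeepStream applies y = net-scale-factor * (x - offsets); PyTorch applies
# y = (x / 255 - mean) / std, so with a single averaged std:
avg_std = sum(std) / len(std)
net_scale_factor = 1.0 / (255.0 * avg_std)
offsets = [round(255.0 * m, 3) for m in mean]

print(net_scale_factor)  # ~0.017352073572792 -> matches net-scale-factor above
print(offsets)           # [123.675, 116.28, 103.53] -> matches offsets above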

And this is how I edited that file when I added the gst-nvdspreprocess plugin:

# (sample header comments identical to the working config above; omitted here)

[property]

# PATHS
model-engine-file=/models/vehicles/vehicles.engine
labelfile-path=/src/src/pipeline/models/vehicles_trt/pipeline/labels.txt
# onnx-file=... can be used to compile if there is no engine

# SETTINGS
gpu-id=0
gie-unique-id=11
is-classifier=1
# process-mode=2 (ignored)

# PERFORMANCE
# 0=FP32 | 1=INT8 | 2=FP16 mode
network-mode=0
classifier-async-mode=0
force-implicit-batch-dim=1
# batch-size=16 --> value set in Python code

# FILTER
operate-on-gie-id=1
input-object-min-width=64
input-object-min-height=64
# the following class-ids are remapped, see src/settings/deepstream_secondary_models_mapping.py
operate-on-class-ids=202;302;602;702;207;307;607;707

# PREPROCESSING
#### model-color-format=0
#### # net-scale-factor = 1 / ( 255 * STD)  where STD is the PyTorch normalization STD
#### net-scale-factor=0.01735207357279195
#### # offsets = MEAN / (net-scale-factor * STD)  where MEAN, STD are the PyTorch normalization MEAN, STD
#### offsets=123.675;116.28;103.53
#### maintain-aspect-ratio=1
#### symmetric-padding=1

# POSTPROCESSING
output-blob-names=vehicle_color;vehicle_type
classifier-threshold=0
output-tensor-meta=0

Finally, this is the config file for gst-nvdspreprocess:

################################################################################
# SPDX-FileCopyrightText: Copyright (c) 2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
################################################################################

# The values in the config file are overridden by values set through GObject
# properties.

[property]
enable=1
unique-id=111
gpu-id=0
target-unique-ids=11
operate-on-gie-id=1
operate-on-class-ids=202;302;602;702;207;307;607;707
# 0=NCHW, 1=NHWC, 2=CUSTOM
network-input-order=0
process-on-frame=0
# if enabled maintain the aspect ratio while scaling
maintain-aspect-ratio=1
# if enabled pad symmetrically with maintain-aspect-ratio enabled
symmetric-padding=1
# processing width/height at which image scaled
processing-width=224
processing-height=224
scaling-buf-pool-size=6
tensor-buf-pool-size=6
# tensor shape based on network-input-order
network-input-shape=16;3;224;224
# 0=RGB, 1=BGR, 2=GRAY
network-color-format=0
# 0=FP32, 1=UINT8, 2=INT8, 3=UINT32, 4=INT32, 5=FP16
tensor-data-type=0
tensor-name=images
# 0=NVBUF_MEM_DEFAULT 1=NVBUF_MEM_CUDA_PINNED 2=NVBUF_MEM_CUDA_DEVICE 3=NVBUF_MEM_CUDA_UNIFIED
scaling-pool-memory-type=0
# 0=NvBufSurfTransformCompute_Default 1=NvBufSurfTransformCompute_GPU 2=NvBufSurfTransformCompute_VIC
scaling-pool-compute-hw=0
# Scaling Interpolation method
# 0=NvBufSurfTransformInter_Nearest 1=NvBufSurfTransformInter_Bilinear 2=NvBufSurfTransformInter_Algo1
# 3=NvBufSurfTransformInter_Algo2 4=NvBufSurfTransformInter_Algo3 5=NvBufSurfTransformInter_Algo4
# 6=NvBufSurfTransformInter_Default
scaling-filter=0
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

[user-configs]
pixel-normalization-factor=0.01735207357279195
offsets=123.675;116.28;103.53
# channel-scale-factors
# channel-mean-offsets
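
To double-check that tensor-name=images and network-input-shape=16;3;224;224 actually match the engine, the bindings can be listed with the TensorRT Python API that ships in the container (a small sketch; the engine path is the one from the SGIE config above):

import tensorrt as trt

# List the engine's input/output bindings so tensor-name and
# network-input-shape in the nvdspreprocess config can be verified.
TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

with open("/models/vehicles/vehicles.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = "input " if engine.binding_is_input(i) else "output"
    print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))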

Please note that I based my code on the official example from the DeepStream Python apps: deepstream_python_apps/apps/deepstream-preprocess-test at master · NVIDIA-AI-IOT/deepstream_python_apps · GitHub. However, I can't find an example for secondary models.
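
This is roughly what the Python code around the new element looks like (a simplified sketch: element names and config paths are placeholders, and the source/streammux/tracker setup is omitted). One thing I am unsure about is the input-tensor-meta property, which that example sets on the PGIE; I assume the SGIE needs it too so that it consumes the tensors attached by nvdspreprocess, but I could not find this documented for secondary models:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("sgie-preprocess-sketch")

# Primary detector and tracker (their configuration is omitted here).
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
tracker = Gst.ElementFactory.make("nvtracker", "tracker")

# Preprocessing for the secondary classifier, placed between tracker and SGIE.
preprocess_sgie = Gst.ElementFactory.make("nvdspreprocess", "preprocess-sgie")
preprocess_sgie.set_property("config-file", "/path/to/config_preprocess_sgie.txt")

# Secondary classifier. input-tensor-meta is the property the example sets on
# the PGIE to make nvinfer consume tensors from nvdspreprocess; assuming the
# same is needed on the SGIE.
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")
sgie.set_property("config-file-path", "/path/to/sgie_vehicles_config.txt")
sgie.set_property("input-tensor-meta", True)

for element in (pgie, tracker, preprocess_sgie, sgie):
    pipeline.add(element)

pgie.link(tracker)
tracker.link(preprocess_sgie)
preprocess_sgie.link(sgie)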

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

We currently do not have a relevant Python demo for this. But could you refer to our C/C++ source code to learn how to link the preprocess element to the SGIE and how to configure the config file? Thanks

sources/apps/sample_apps/deepstream-preprocess-test
