Preprocessing and postprocessing in DeepStream Python

• Hardware Platform (Jetson / GPU) : Jetson
• DeepStream Version : 7.1
• JetPack Version (valid for Jetson only) : 6.2
• TensorRT Version : 10.3.0.30

Hi,

I am working on DeepStream with Python (GitHub - NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications), using the nvcr.io/nvidia/deepstream:7.1-triton-multiarch Docker image.

I have successfully created an app with a YOLO model as the pgie to detect persons. Now I am looking to add a secondary model (sgie) to predict the gender of each detected person.

The main challenge is in the pipeline: how to preprocess the objects detected by the pgie and then feed them to the sgie as input.

I am using a probe function to get the object bbox details from the pgie, but how do I link the preprocessing step required by the sgie into the pipeline? Does the preprocessing have to be attached like a probe function before the sgie?
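For reference, a minimal sketch of the kind of probe I mean, following the deepstream_python_apps samples (the element/pad names and the print are just illustrative):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def pgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # batch metadata attached by nvstreammux / nvinfer
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            try:
                obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            except StopIteration:
                break
            rect = obj_meta.rect_params  # bbox at the streammux resolution
            print(obj_meta.class_id, rect.left, rect.top, rect.width, rect.height)
            try:
                l_obj = l_obj.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# attached on the pgie src pad:
# pgie.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, 0)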

Another clarification: currently I am using the probe to read the output of the pgie. If I add the sgie, how do I get its output along with the pgie outputs?

I need to know the hierarchy for this scenario. Please help me understand this flow, thanks.

Please refer to our source code sample sources\apps\sample_apps\deepstream-test2 first. This sample demonstrates how to first detect a vehicle, and then classify the type of the vehicle.
Pipeline:

source->nvstreammux->pgie->sgie->nvdsosd->sink

Please set the config files according to your sgie model.
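In Python the structure is the same as deepstream-test2: the secondary classifier is just another nvinfer instance linked after the pgie. A minimal sketch, assuming the pipeline, streammux, pgie and nvosd elements already exist (Gst imported as in the probe sketch above) and using placeholder config file names:

sgie = Gst.ElementFactory.make("nvinfer", "secondary-gender-classifier")
sgie.set_property("config-file-path", "sgie_gender_config.txt")  # placeholder file name
pipeline.add(sgie)

# streammux -> pgie -> sgie -> nvdsosd -> sink
streammux.link(pgie)
pgie.link(sgie)
sgie.link(nvosd)

# typical keys in the sgie config file:
#   process-mode=2           operate on detected objects, not full frames
#   operate-on-gie-id=1      must match the pgie's gie-unique-id
#   operate-on-class-ids=0   the pgie class id for "person" (model-dependent)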

@yuweiw, thanks for writing back.

I went through the test2 app and understand the flow, but my sgie model needs a little custom preprocessing, which I need to apply to the objects predicted by the pgie. I need to know the flow to achieve that.

How do I link that preprocessing function? Is there any component available to do it?

Yes. You can modify the pipeline like below.

source->nvstreammux->pgie->nvdspreprocess->sgie->nvdsosd->sink

You can refer to our deepstream-pose-classification. The pipeline is similar to your scenario.
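A rough Python sketch of that change, continuing the earlier sketch and using a placeholder nvdspreprocess config file name:

preprocess = Gst.ElementFactory.make("nvdspreprocess", "secondary-preprocess")
preprocess.set_property("config-file", "config_preprocess_sgie.txt")  # placeholder file name
pipeline.add(preprocess)

# make the sgie consume the tensors prepared by nvdspreprocess
# instead of doing its own cropping/scaling/normalization
sgie.set_property("input-tensor-meta", True)

# streammux -> pgie -> nvdspreprocess -> sgie -> nvdsosd -> sink
pgie.link(preprocess)
preprocess.link(sgie)
sgie.link(nvosd)

# In the nvdspreprocess config file, process-on-frame=0 makes it operate on the
# objects detected by the pgie, and target-unique-ids selects which gie consumes
# the prepared tensors. The custom preprocessing parameters go in that config file.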

This is C++ code; is there any example in Python available for reference?
Please let me know.

I need to clarify the following:

Statement 1 - the pgie gives the coordinates of the objects at the input stream resolution

Statement 2 - the sgie needs the cropped object image, preprocessed according to some instructions

Both of these statements apply in my case.

Does DeepStream automatically crop the image based on the pgie output and feed it as input to the sgie? Is that the way it works? If so, how does this happen, and what is the flow? In the apps I have seen, we just link the different components and it works fine, but where is this flow of cropping the objects based on the pgie output defined? I mean, which parameter handles this input for the sgie? If I know that, I can look at adding a probe to process that input.

Also, how do I create the labels.txt file if the model has multiple heads, for example one head for classification and another for a regression value? How is that handled in the labels.txt file? And how do I write the sgie config file and set the other parameters, like num-detected-classes, etc.?

please help me understand this, thanks

No. The pgie just attaches the coordinates of the objects as metadata; it does not crop the images. The cropping and scaling for the sgie happen inside the gst-nvinfer plugin itself, using those object coordinates. If you want to know the detailed process, please refer to the nvinfer diagram and read our open-source code sources\gst-plugins\gst-nvinfer\gstnvinfer.cpp.
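For a classifier-type sgie, the result is likewise attached back onto the same object's metadata, so a probe placed downstream of the sgie can read the pgie bbox and the sgie classification together. A rough sketch of that part of the loop (assuming the pyds bindings and the obj_meta variable from the per-object loop shown earlier):

# inside the per-object loop of a probe attached downstream of the sgie
# (obj_meta is the pyds.NvDsObjectMeta of the current object):
l_class = obj_meta.classifier_meta_list
while l_class is not None:
    class_meta = pyds.NvDsClassifierMeta.cast(l_class.data)
    l_label = class_meta.label_info_list
    while l_label is not None:
        label_info = pyds.NvDsLabelInfo.cast(l_label.data)
        # e.g. "male"/"female" for a gender classifier, plus its confidence
        print(obj_meta.object_id, label_info.result_label, label_info.result_prob)
        l_label = l_label.next
    l_class = l_class.next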

This needs to be configured according to your own model.

Could you please share the documentation for this, or any reference?

sources\gst-plugins\gst-nvinfer\gstnvinfer.cpp - I will go through it; I am brushing up on my C/C++ syntax.

thanks for sharing this

This should be provided by the organization that trained the model. DeepStream is used for deploying the model; it won’t generate the label file.

I am asking about the syntax of the labels.txt file.

For example, the model outputs three values: the first two are probability values for the classification task, and the last value is for the regression task. If the model architecture is like this, what format should the labels.txt be in?

You can refer to our source code samples\models\Secondary_VehicleMake\labels.txt.
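For reference, the classifier label file is a semicolon-separated list of class names, with one line per output attribute. A purely illustrative example for a gender head (this only covers classification outputs):

male;female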

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.