How to integrate detection and classification models trained in the PyTorch framework into DeepStream?

1. How can I integrate multiple detection and classification PyTorch models trained by ourselves into DeepStream? My PyTorch detection and classification models were converted into engines in DS, which run but produce wrong classifications. In the PyTorch framework, the ROI images cropped after detection are pre-processed before being sent to the classification models, but I do not know how to place my pre-processing modules after detection in DS.
2. The official demos are all caffemodels. The training courses use NVIDIA's own TensorRT and TAO, and do not cover the PyTorch framework in DS. The training content does not seem to match what I want to learn. Do you have source code tutorials for DeepStream?

Are you working with the deepstream-app sample?
Do you mean you have a detector model (such as YOLO, which detects objects and marks the bboxes) and several classifier models (which classify the objects detected by the detector)? And your classifier models need the detected objects to be pre-processed by some special algorithm?

If so, you can set the detector's nvinfer as the primary model (process-mode=1) and set the classifier models' nvinfer configurations as secondary mode (process-mode=2). Then nvinfer will crop the objects with the bboxes and do the pre-processing inside it. Currently, nvinfer supports scaling, color format conversion, and normalization pre-processing. Gst-nvinfer — DeepStream 6.1.1 Release documentation
There is also a sample for primary GIE + secondary GIEs (/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt) for deepstream-app.
Can you list what kind of pre-processing you need?
Can you send the nvinfer configurations of your models here?
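As a sketch, the primary/secondary nvinfer config files might look like this (the file names, engine paths, and property values below are placeholders, not taken from your setup; the keys are from the Gst-nvinfer documentation):

# pgie_detector.txt (placeholder values)
[property]
process-mode=1        # primary: runs on full frames
gie-unique-id=1
model-engine-file=detector.engine
network-type=0        # 0 = detector

# sgie_classifier.txt (placeholder values)
[property]
process-mode=2        # secondary: runs on objects from a primary GIE
gie-unique-id=2
operate-on-gie-id=1   # classify the objects produced by gie-unique-id=1
model-engine-file=classifier.engine
network-type=1        # 1 = classifier
classifier-threshold=0.5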

We have documents for all modules and APIs: C/C++ Sample Apps Source Details — DeepStream 6.1.1 Release documentation. The open source code is available in /opt/nvidia/deepstream/deepstream/sources on your device after the SDK is installed correctly.

"mean" corresponds to offsets. For net-scale-factor you can try 1/0.229, 1/0.224, or 1/0.225; use whichever works better. You don't need a mean.ppm file.

This is our formula: Gst-nvinfer — DeepStream 6.1.1 Release documentation

This is the PyTorch formula: Normalize — Torchvision main documentation
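For reference, here is a sketch of how the two formulas can be matched numerically (an assumption of this sketch: nvinfer applies y = net-scale-factor * (x - offsets) to 0-255 pixel values, while torchvision's Normalize is applied after scaling pixels to 0-1; the ImageNet mean/std values below are the common PyTorch defaults, not necessarily yours):

```python
import numpy as np

# Torchvision-style normalization: y = (x/255 - mean) / std, per channel.
# Gst-nvinfer normalization:       y = net-scale-factor * (x - offsets),
# with per-channel offsets but a single scalar net-scale-factor.
# Equating the two: offsets = 255 * mean, net-scale-factor = 1 / (255 * std).
# Since nvinfer has only one scale factor, averaging the stds is a common
# compromise (an approximation unless all channel stds are equal).
mean = np.array([0.485, 0.456, 0.406])  # common ImageNet values
std = np.array([0.229, 0.224, 0.225])

offsets = 255.0 * mean                       # [123.675, 116.28, 103.53]
net_scale_factor = 1.0 / (255.0 * std.mean())

x = np.array([128.0, 64.0, 200.0])           # one example pixel, 0-255 range
y_pytorch = (x / 255.0 - mean) / std
y_nvinfer = net_scale_factor * (x - offsets)
print(offsets, net_scale_factor)
print(np.abs(y_pytorch - y_nvinfer).max())   # small residual from averaging stds
```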

classifier-threshold depends on your classifier model’s post-processing.

color_index = {0: 'black', 1: 'blue', 2: 'brown', 3: 'gold', 4: 'gray', 5: 'green', 6: 'orange', 7: 'purple', 8: 'red', 9: 'silver', 10: 'white', 11: 'yellow'}

def postprocess(self, results):
    print(results.as_numpy('output'))  # [[ -5.837885 -9.479594 -6.5933547 -11.668275 -11.712275 -11.332783 -10.177577 -6.1661963 -2.4280329 -4.6922846 10.526807 -6.9432917]]
    value, result = torch.Tensor(results.as_numpy('output')).max(1)
    print(value, result)  # tensor([10.5268]), tensor([10])
    confidence = torch.softmax(torch.Tensor(results.as_numpy('output')[0]), dim=0)
    print(confidence)  # tensor([7.8145e-08, 2.0480e-09, 3.6712e-08, 2.2951e-10, 2.1963e-10, 3.2100e-10, 1.0191e-09, 5.6276e-08, 2.3647e-06, 2.4571e-07, 1.0000e+00, 2.5872e-08])
    index = result.item()
    print(index)  # 10
    conf = confidence[index].numpy().item()
    print(conf)  # 0.9999972581863403
    return index, conf

Predicted result: {'color': 'white', 'index': 10, 'conf': 0.9999972581863403}

The postprocess only applies softmax to the output, then takes the max confidence and its label index. I do not have a classifier-threshold. I do not know whether DS supports this postprocessing; if not, which plugin function can I use to write my own postprocessing code in DS 6.0.1?

The default classifier postprocessing algorithm can be found in /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer/nvdsinfer_context_impl_output_parsing.cpp, in the function "ClassifyPostprocessor::parseAttributesFromSoftmaxLayers()".
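Roughly, that default parser does something like the following (a Python paraphrase for illustration, not the exact C++ source). Note that it thresholds the output values directly, so it assumes the model's last layer already produces probabilities (e.g. a softmax layer); raw logits like the [-5.8, ..., 10.5] vector above would need a softmax applied first, either inside the model or in a custom parser:

```python
import numpy as np

def parse_softmax_like(layer_outputs, classifier_threshold):
    # Paraphrase of the default classifier parsing: for each output layer,
    # take the argmax and keep it only if its value exceeds the threshold.
    attributes = []
    for probs in layer_outputs:
        idx = int(np.argmax(probs))
        if probs[idx] > classifier_threshold:
            attributes.append((idx, float(probs[idx])))
    return attributes

logits = np.array([-5.84, -9.48, -6.59, -11.67, -11.71, -11.33,
                   -10.18, -6.17, -2.43, -4.69, 10.53, -6.94])
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over raw logits
print(parse_softmax_like([probs], classifier_threshold=0.51))
```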

If it does not match your postprocess, please customize the postprocess algorithm. For your case, you can use "parse-classifier-func-name" and "custom-lib-path" to enable your own customized post-processing algorithm. Gst-nvinfer — DeepStream 6.1.1 Release documentation

The sample NVIDIA-AI-IOT/deepstream_lpr_app (sample app code for LPR deployment on DeepStream) shows how to use "parse-classifier-func-name" and "custom-lib-path" to customize classifier postprocessing.

Please read our document and sample codes carefully.
