How to use custom-trained image classification model with dstest2_sgie3_config.txt

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Orin Nano
• DeepStream Version: 6.3
• JetPack Version: 5.1.2
• TensorRT Version: 8.5.2

Hi NVIDIA Developer

I want to know how to use a custom-trained image classification model with the DeepStream SDK Python bindings. These are my custom-trained model details:

  1. Trained with PyTorch in Google Colab (I have already converted it to ONNX and TensorRT format and can run inference successfully on the NVIDIA Jetson Orin Nano without using DeepStream)

  2. This model predicts 6 car types, e.g. Bus, Motorbike, SUV, Truck, Pickup, and Sedan

  3. These are the image pre-processing steps in PyTorch (resize the image, convert it to a tensor, and normalize it):

cartype_prep = transforms.Compose([
    transforms.Resize((500, 500)),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
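As a rough sketch of how this normalization maps onto Gst-nvinfer's pre-processing parameters (based on my reading of the nvinfer docs: nvinfer computes net-scale-factor * (pixel - offset) on 0-255 pixel values, supports per-channel offsets, but only a single scalar scale factor, so the per-channel stds have to be approximated by their average):

```python
# Sketch: translate torchvision Normalize(mean, std) into Gst-nvinfer's
# net-scale-factor and offsets parameters. torchvision normalizes tensors
# in the 0..1 range, while nvinfer works on 0..255 pixel values.

mean = (0.485, 0.456, 0.406)   # torchvision channel means (0..1 range)
std = (0.229, 0.224, 0.225)    # torchvision channel stds (0..1 range)

# Convert means to the 0..255 pixel scale for the `offsets` parameter.
offsets = [m * 255.0 for m in mean]

# Single scalar scale factor: 1 / (255 * average std). This is exact only
# if all std values were equal; here they are close, so the error is small.
avg_std = sum(std) / len(std)
net_scale_factor = 1.0 / (255.0 * avg_std)

print(f"net-scale-factor={net_scale_factor:.10f}")
print("offsets=" + ";".join(f"{o:.3f}" for o in offsets))
```

This yields approximately net-scale-factor=0.0173520 and offsets=123.675;116.280;103.530 for the torchvision ImageNet statistics above.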

I also see dstest2_sgie3_config.txt in
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-test2/dstest2_sgie3_config.txt. There are many configuration options in this file, e.g.:

[property]
gpu-id=0
net-scale-factor=1
model-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel
proto-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.prototxt
model-engine-file=../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
mean-file=../../../../samples/models/Secondary_VehicleTypes/mean.ppm
labelfile-path=../../../../samples/models/Secondary_VehicleTypes/labels.txt
int8-calib-file=../../../../samples/models/Secondary_VehicleTypes/cal_trt.bin
force-implicit-batch-dim=1
batch-size=16
network-mode=1
input-object-min-width=64
input-object-min-height=64
model-color-format=1
process-mode=2
gpu-id=0
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51
process-mode=2
#scaling-filter=0
#scaling-compute-hw=0
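As an illustration of which properties change for an ONNX classifier, a modified config might look like the sketch below. All file names are hypothetical placeholders, and each property should be verified against the Gst-nvinfer documentation for your DeepStream version:

[property]
gpu-id=0
# single scalar scale; approximates 1/(255 * mean(std)) for the
# torchvision normalization above (per-channel std is not supported)
net-scale-factor=0.0173520
offsets=123.675;116.28;103.53
# hypothetical file names - replace with your actual model files
onnx-file=cartype_classifier.onnx
model-engine-file=cartype_classifier.onnx_b16_gpu0_fp16.engine
labelfile-path=cartype_labels.txt
batch-size=16
# 2 = FP16 (INT8 would additionally need a calibration file)
network-mode=2
# 1 = classifier (newer replacement for is-classifier=1)
network-type=1
# 0 = RGB, matching PyTorch's channel order
model-color-format=0
# 2 = operate on objects found by the primary detector
process-mode=2
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0
# set to your model's actual output layer name
#output-blob-names=
classifier-threshold=0.51

Note that force-implicit-batch-dim is dropped because ONNX models use explicit batch dimensions. Also, if I recall the nvinfer docs correctly, a classifier label file lists its labels semicolon-delimited (e.g. bus;motorbike;suv;truck;pickup;sedan) rather than one per line.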

I want to know how to run my custom-trained model in the DeepStream SDK Python: which parts of this config do I need to modify, and do you have any example or tutorial for this case?

Thanks

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

Gst-nvinfer does not support a per-channel scaling factor.
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html

Please refer to the DeepStream SDK FAQ (Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums) for the other parameters.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.