Run custom classification model in DeepStream - Example

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) → Jetson Nano
• DeepStream Version → 5.1
• JetPack Version (valid for Jetson only) → 4.5

I’m a total beginner. I trained a classifier model on Windows using TensorFlow and converted it to ONNX, but now I can’t find a tutorial on how to run this model in DeepStream. The provided examples are only detectors and an audio classifier. Since I don’t know what exactly to change, or where, I’m lost here. Can anyone help me out?

Hi,

For example, you can use source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt in the following folder:

/opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/

That file links a model config for the [primary-gie] section, i.e. the primary GPU inference engine.
Then you can update the model config based on your use case:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#gst-nvinfer-file-configuration-specifications
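
For reference, the deepstream-app config links each nvinfer config file through a GIE section; a minimal sketch of the primary one, assuming an illustrative file name:

[primary-gie]
enable=1
gpu-id=0
# Path to the nvinfer model config (a [property] file like the example below)
config-file=config_infer_primary.txt
gie-unique-id=1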

Ex.

[property]
gpu-id=0
# Preprocessing: y = net-scale-factor * (x - offset); these values map [0,255] to [-1,1]
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
force-implicit-batch-dim=1
batch-size=16
model-color-format=0
# Serialized TensorRT engine, generated from the ONNX file on first run
model-engine-file=model.onnx_b16_gpu0_fp16.engine
labelfile-path=labels.txt
onnx-file=model.onnx
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=9
interval=0
# Unique ID of this classifier GIE (the primary detector is 1)
gie-unique-id=2
# Run the network as a classifier instead of a detector
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.51
input-object-min-width=128
input-object-min-height=128
# Classify objects produced by the detector with gie-unique-id=1
operate-on-gie-id=1
operate-on-class-ids=0
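
To test, you can run deepstream-app against the sample config (assuming the default install path):

cd /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/
deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

On the first run, nvinfer builds the TensorRT engine from the ONNX file and caches it at the model-engine-file path.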

Thanks.

Thank you. Can you actually explain to me why there are like a million examples for detectors and Caffe models but no example for a classifier? This seems so weird to me :D

So I tried to change the config file:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
onnx-file=../../models/Primary_Detector_Nano/thumbsmodel.onnx
labelfile-path=../../models/Primary_Detector_Nano/labels_thumbs.txt
batch-size=8
process-mode=1
model-color-format=0
network-mode=2
num-detected-classes=3
interval=0
gie-unique-id=1
is-classifier=1
output-blob-names=predictions/Softmax
classifier-async-mode=1
classifier-threshold=0.4
input-object-min-width=224
input-object-max-width=224
input-object-min-height=224
input-object-max-height=224

#Use these config params for group rectangles clustering mode
[class-attrs-all]
pre-cluster-threshold=0.2
group-threshold=1
eps=0.2
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

and it’s giving me the following error:
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
0:00:01.157265624 13073 0x7f54002390 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1716> [UID = 1]: Trying to create engine from model files

Input filename: /opt/nvidia/deepstream/deepstream-5.1/samples/models/Primary_Detector_Nano/thumbsmodel.onnx
ONNX IR version: 0.0.7
Opset version: 13
Producer name: keras2onnx
Producer version: 1.8.0
Domain: onnxmltools
Model version: 0
Doc string:

INFO: [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
ERROR: [TRT]: ../builder/cudnnBuilderUtils.cpp (427) - Cuda Error in findFastestTactic: 719 (unspecified launch failure)
ERROR: [TRT]: ../rtSafe/safeRuntime.cpp (32) - Cuda Error in free: 719 (unspecified launch failure)
terminate called after throwing an instance of ‘nvinfer1::CudaError’
what(): std::exception
Aborted (core dumped)

So I guess the model is too big? Which model could I use to make this work?
Thank you.

So I got it running. Now my problem is, of course, that the example is meant for detection, not for classification as I asked.
Which files do I need to change, and in what way, to make this happen? I would also like to use my MIPI camera as input.
I’ll probably open a new topic for this, but thank you for the support anyway!

Hi,

Usually, we prefer to run classification on an ROI region rather than on the full image.
That’s why most of our examples run inference in a detection + classification manner.
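
In the deepstream-app config, that pattern means the classifier runs as a secondary GIE on top of the primary detector; a minimal sketch, with an illustrative config-file name:

[secondary-gie0]
enable=1
gpu-id=0
# Unique ID of this classifier GIE
gie-unique-id=2
# Classify the objects found by the primary detector (gie-unique-id=1)
operate-on-gie-id=1
# nvinfer config with is-classifier=1, like the [property] example above
config-file=config_infer_secondary_classifier.txt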

For a CSI camera, you can modify the [source0] group accordingly.
Ex. /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/source1_csi_dec_infer_resnet_int8.txt

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP 5=CSI
type=5
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1

...
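
Note that on Jetson, type=5 uses the CSI capture path (nvarguscamerasrc under the hood), so the camera-width/height/fps values need to match a sensor mode your camera supports.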

Thanks.