How to get more accurate classification results in DeepStream?

I trained a custom classifier using NVIDIA’s Transfer Learning Toolkit, and the classifier works fine inside DeepStream. The only problem I face is that after some time the classifier results get mixed up. For example, say there are two classes, customer and security, and they are clearly distinguishable by their uniforms. Even though the results are fine for the first few seconds, I then see the label switch from security to customer for an actual security guard, and the result stays customer as long as the tracking ID remains the same. Any idea as to how to go about this case? Any help would be appreciated, thanks.

Moving this topic from the DeepStream forum to the TLT forum.

Hi beefshepherd,
You trained a classification network with your own dataset (two classes, customer and security), right?
What were the training results when you ran tlt-train? And what were the results from tlt-infer?

Yeah, I have trained the network with a custom two-class dataset. The training accuracy was really high, and even when I run the infer command the results look fine. The classifier works perfectly fine outside DeepStream; it’s inside DeepStream that the behavior gets a bit weird, with the switch happening more often and causing an issue.

Which app did you use to test, deepstream-test1-app?
Could you paste the running command along with the config file? For example, if you run with deepstream-test1-app, the default config file is dstest1_pgie_config.txt. Please paste it here.

Also, it would be better if you could share a video link so we can understand the issue better.


Yeah, will share the video along with the config.

@Morganh, here is a small snippet of the video.

and the config file is as follows:

# preprocessing parameters: These are the same for all classification models generated by TLT.

# Model specific paths. These need to be updated for every classification model.

## 0=FP32, 1=INT8, 2=FP16 mode
# process-mode: 2 - inferences on crops from primary detector, 1 - inferences on whole frame
network-type=1 # defines that the model is a classifier.
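For context, a filled-in version of such a secondary-classifier nvinfer config might look like the sketch below. The key names follow the Gst-nvinfer configuration reference; the paths and values are placeholders I have assumed, not taken from the original post.

```ini
[property]
# preprocessing parameters: the same for all TLT classification models
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1

# model-specific paths (placeholders, to be updated per model)
tlt-encoded-model=<path/to/model.etlt>
tlt-model-key=<your NGC key>
labelfile-path=<path/to/labels.txt>

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
# process-mode: 2 - inference on crops from the primary detector, 1 - whole frame
process-mode=2
network-type=1   # 1 = classifier
classifier-threshold=0.5
```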


Nice results.
Are you using back-to-back detectors?
What type of machine are you running the model on?
Are you using the results just for monitoring, or is there some flagging or triggering that the results produce?
What kind of camera are you using, USB or IP cam?

Very nice

Sorry @adventuredaisy, I believe I’m not in a position to reveal that information. We have setups with both back-to-back detectors and a single detector.

Hi beefshepherd,
How many classes did you train in TLT? From your comments I gather that you trained the network with a custom two-class dataset. But you set


in the config file. Please set it to 2 and retry.
Also, can you try


Also, does this issue happen in other DeepStream apps? You can cross-check to figure out which component is the culprit and where the gap is. Thanks.

Hi Morganh,

Yeah, I actually have three classes; two was just an example. Will try setting the process-mode. We’ve written our own custom DeepStream app.
Wait, isn’t


forcing it to perform inference on the entire frame? That would defeat the purpose of using the classifier, or am I wrong in that understanding?

Is there anything else I should be checking?
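For what it’s worth, my reading of the Gst-nvinfer configuration reference is that process-mode=1 means full-frame inference while process-mode=2 means inference on crops from the primary detector, so a secondary classifier would normally look like the sketch below (operate-on-gie-id is an assumption about how the pgie is configured, not something stated in this thread):

```ini
# secondary classifier settings (sketch; key names from the Gst-nvinfer reference)
process-mode=2        # 2 = classify crops produced by the primary detector
operate-on-gie-id=1   # assumes the primary detector has gie-unique-id=1
```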

@Morganh, is there a more robust way of communicating problems apart from the forum? We are also part of NVIDIA’s Inception program, and our development is being hindered by these small glitches we come across. I would love a better communication channel than what we have now. I’ve run into other issues too which still haven’t been resolved, even though they were posted here.

I think there should be a bridge for NVIDIA’s Inception program. Please contact the program/product manager or someone else in your company for further help from NVIDIA.
In the forum, all users are anonymous; we do not know who they are or which company they come from. I am just focusing on the topics inside the TLT forum.

Hi beefshepherd,
Can you run your trained tlt model with the default deepstream-test1-app or another app to see what happens? If the issue is not reproduced, then there might be some gap between the default app and your custom app.

Cool, makes sense. Will share the results after I run the test1 app with the classifier we trained using TLT.

Edit: Test2 app

Hey @Morganh, it’s the same thing when I tried the deepstream-test2 app. The switching happens.

Did you try deepstream-test1-app?

@Morganh, the test1 app has no classification in it, so I’m not sure how running test1 would be helpful.

One important thing I want to ask: which network in TLT did you train to get the tlt model? Classification, detectnet_v2, Faster-RCNN, or SSD?

There is a detection network and a classification network, similar to the deepstream-test2 app, where classification happens on the objects detected by the primary detection network. We used the classification Jupyter notebook in TLT to train the classifier, a ResNet18.
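Given that pipeline (tracker plus secondary classifier), one thing worth checking, based on my reading of the Gst-nvinfer configuration reference rather than anything stated in this thread, is how often the sgie re-infers on tracked objects, since a cached classification result can stick to a tracking ID:

```ini
# sgie re-inference behavior (sketch; values are illustrative assumptions)
classifier-async-mode=0        # 0 = attach classifier metadata synchronously
secondary-reinfer-interval=0   # 0 = re-classify tracked objects on every frame
```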