Issue with image classification tutorial and testing with deepstream-app

Thanks for the info. So, your tlt-infer can work well.
BTW, we can run inference in directory mode to run on a set of test images.
According to the jupyter notebook, the sample command is as below.

tlt-infer classification -m $USER_EXPERIMENT_DIR/output_retrain/weights/resnet_$EPOCH.tlt \
    -k $KEY -b 32 -d $DATA_DOWNLOAD_DIR/split/test/person \
    -cm $USER_EXPERIMENT_DIR/output_retrain/classmap.json

The inference result will be saved in test/person/result.csv
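To sanity-check a directory run like this, the per-image rows in result.csv can be tallied with a short script. This is a minimal sketch, assuming each row looks like `image_path, predicted_class, confidence`; the column order may differ between TLT versions, so adjust the index if needed:

```python
import csv
from collections import Counter

def summarize_results(csv_path):
    """Count how many images were assigned to each predicted class.

    Assumes each row is: image_path, predicted_class, confidence
    (adjust the column index if your result.csv layout differs).
    """
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) >= 2:
                counts[row[1].strip()] += 1
    return counts
```

Running this over each result.csv gives a quick per-class prediction count to compare against the ground-truth directory the images came from.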

Hi @Morganh

Thanks for the hint. I’ve run the inference command against my testing images as per your advice and the results look good to me. Please see the attached files. I issued the tlt-infer command against each category directory (hence 3 files):
result_original_Good.csv (87.5 KB)
result_original_Leakage.csv (64.1 KB)
result_original_Scratch.csv (84.4 KB)

Please refer to

How about running an mp4 file instead of rtsp?
Could you provide more details about "Unfortunately this is where the model stops working"? Any log?

Firstly, please run the below example and make sure it can run.

Try to configure it with your rtsp source instead of the mp4 file, and make sure it can run.

If above works, please check again your deepstream config along with the inference-config file.
In your inference config file, please modify
infer-dims=3;224;224 to infer-dims=3;224;224;0
net-scale-factor=1 to net-scale-factor=1.0

Additionally, there are two ways to run a classification model with deepstream.

  1. Run as primary gie
  2. Run as secondary gie

If set as primary gie,
please set process-mode=1 in the inference config file.

If set as secondary gie,
please set process-mode=2 in the inference config file.
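The two process-mode settings above can be sketched as a fragment of the nvinfer classifier config file (property names per the nvinfer plugin; all other required keys are omitted here):

```ini
[property]
# Run the classifier on full frames (primary gie):
process-mode=1
# ...or on objects found by a primary detector (secondary gie):
# process-mode=2
```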

Hi @Morganh,

Changing infer-dims=3;224;224 to infer-dims=3;224;224;0 produces the error:

Error. 'infer-dims' array length is 4. Should be 3 as [c;h;w] order.
Failed to parse group property

Did you mean input-dims=3;224;224;0?

I did that and it didn’t make any difference.

In the meantime I have been trying to follow a different path: train a TensorFlow model and convert it to ONNX. I managed to do it using ResNet50V2 as a base (ResNet50 didn’t work since it had to be converted to ONNX with opset 10, which the Jetson Nano’s TensorRT doesn’t support) and it all worked. I used this config file:



## 0=FP32, 1=INT8, 2=FP16 mode


I am going to give NVIDIA’s tutorial another go and train a ResNet model with 50 layers to see if it makes any difference. It does take a lot of time, though, to train on about 7000 images.

I think you can close this ticket since I found a different solution to my problem, but I do think that the tutorial needs polishing.

Yes, it is input-dims=3;224;224;0
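For reference, a minimal sketch of the relevant [property] lines with the corrected key (names follow the nvinfer classifier config discussed in this thread; the model path and other required keys are omitted):

```ini
[property]
# 4th value is the input-order flag used by TLT classification models
input-dims=3;224;224;0
net-scale-factor=1.0
# 1 = run as primary gie on full frames
process-mode=1
```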

Glad to see that you have the solution now.

Actually, when you deploy the tlt model in the config file, it should work.
For example, if you trained a two-class model (person and another class) with the TLT classification network, then you can run inference in the below two ways in deepstream.

  1. Work as primary trt engine
    ds_classification_as_primary_gie (3.4 KB)
    config_as_primary_gie.txt (741 Bytes)

nvidia@nvidia:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app$ deepstream-app -c ds_classification_as_primary_gie

  2. Work as secondary trt engine
    ds_classification_as_secondary_gie (3.6 KB)
    config_as_secondary_gie.txt (741 Bytes)

nvidia@nvidia:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app$ deepstream-app -c ds_classification_as_secondary_gie

Yes - as I said, I configured the three-class model as the primary trt engine and it didn’t work for me. Only one label is displayed irrespective of the image, as I described above.

Please refer to the way I mentioned above.
Note that the below two lines are not changed.


And please add the below line to your infer spec.


Hi @Morganh,
I could integrate and run my classification model with deepstream, but the classified outputs are wrong with deepstream. With tlt-infer and a standalone python program, classification is correct. Do I need to change some parameters in the config file to get correct results?

Do I need to do some RGB-BGR conversion in the configuration file? @Morganh

The model-color-format should be "1" for BGR configuration.

model-color-format = 1

You can try changing the below to check if it helps.



offsets = 103.939;116.779;123.68
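To see what these two settings do together, nvinfer's documented preprocessing, y = net-scale-factor * (x - offset) per channel, can be reproduced for a single pixel. This is a hedged sketch (the function name and structure are illustrative, not DeepStream code); with model-color-format=1, the offsets are applied in BGR channel order:

```python
# Caffe-style ImageNet channel means, in BGR order (model-color-format=1)
OFFSETS_BGR = (103.939, 116.779, 123.68)
NET_SCALE_FACTOR = 1.0

def preprocess_pixel(bgr, offsets=OFFSETS_BGR, scale=NET_SCALE_FACTOR):
    """Apply nvinfer-style mean subtraction and scaling to one BGR pixel:
    y = scale * (x - offset), computed independently per channel."""
    return tuple(scale * (c - o) for c, o in zip(bgr, offsets))
```

If the model was trained with this same mean subtraction, a pixel equal to the mean maps to zero, which is a quick way to confirm the offsets match the training preprocessing.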

Thanks @Morganh ,
Yeah, my model-color-format is set as 1, and I have replaced the previous offsets with 103.939;116.779;123.68, but still all the frames that are supposed to belong to the positive class are predicted as the negative class.

Thanks for the info. I will check further.


As mentioned above, please modify to


I confirm that it can get the same result as tlt-infer.
Work as primary trt engine
ds_classification_as_primary_gie (3.4 KB)
config_as_primary_gie (3).txt (743 Bytes)

Additionally, please double-check your label file.
Yours should be


@Morganh ,
I checked it. I used the same ds_classification_as_primary_gie as the config file for deepstream-app, and the label file is in the order of classmap.json, meaning the first one is negative and the second one positive.
Is there a way to get the predicted outputs printed on the terminal ?

With the step I mentioned above, the predicted output will show at the top left corner of the monitor.
For other ways, please search or ask in deepstream forum.

How did you generate the video file for running in deepstream?
Please consider below way.
gst-launch-1.0 multifilesrc location="/tmp/%d.jpg" caps="image/jpeg,framerate=30/1" ! jpegdec ! x264enc ! avimux ! filesink location="out.avi"
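If the source frames are not already numbered the way multifilesrc's %d pattern expects (0.jpg, 1.jpg, 2.jpg, ...), they can be copied into that scheme first. A minimal sketch, assuming the frames live in ./images and sort correctly by name (the paths here are illustrative, not from this thread):

```shell
#!/bin/sh
# Copy arbitrarily named JPEGs into the 0.jpg, 1.jpg, ... numbering
# that multifilesrc's %d location pattern expects.
mkdir -p /tmp/frames
i=0
for f in ./images/*.jpg; do
  [ -e "$f" ] || continue   # skip if the glob matched nothing
  cp "$f" "/tmp/frames/$i.jpg"
  i=$((i + 1))
done
```

A subdirectory is used here to keep /tmp tidy; with this layout the pipeline's pattern becomes location="/tmp/frames/%d.jpg".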

Hi @Morganh
I didn’t generate the video; I got it from my colleague. The video is not an issue. I successfully used it with an ONNX model, where classification worked correctly in deepstream-app.

To narrow down, you can try to run standalone python script to do inference against the trt engine.
Reference: Inferring resnet18 classification etlt model with python - #41 by Morganh
Per my test result, it can get the same result as tlt-infer.

Hi @Morganh, does this expect images named 0.jpg, 1.jpg, 2.jpg, etc. in the image directory (say, a temp folder)?