Inferring a resnet18 classification .etlt model with Python

Now the issue is with the images.
File “”, line 49, in normalize_image
return np.asarray(image.resize((w, h), Image.ANTIALIAS)).transpose([2, 0, 1]).astype(trt.nptype(trt.float32)).ravel()
ValueError: axes don’t match array
Any idea, @jazeel.jk?
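For context, this `ValueError` usually means the decoded array has no channel axis — a greyscale image yields a 2-D `(H, W)` array, so `transpose([2, 0, 1])` has only two axes to work with. A minimal hedged sketch of a guard (forcing three channels with `convert("RGB")`; `Image.LANCZOS` stands in for the now-removed `Image.ANTIALIAS` alias):

```python
import numpy as np
from PIL import Image

def normalize_image(image, w=224, h=224):
    # Force three channels first: a greyscale input yields a 2-D array,
    # and transpose([2, 0, 1]) then fails with "axes don't match array".
    arr = np.asarray(image.convert("RGB").resize((w, h), Image.LANCZOS))
    # HWC -> CHW, cast to float32 and flatten, as in the snippet above.
    return arr.transpose([2, 0, 1]).astype(np.float32).ravel()
```

With the `convert("RGB")` call in place, both colour and greyscale inputs produce the same flattened `3 * h * w` float32 buffer.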

@senbhaskar26 what is the input shape of your image? Is it a colour image or a greyscale image?
I am not sure about the preprocessing part… I am able to load the image and predict without any error, but the output the model predicts is not right. The same model gives the correct output with tlt-infer… that’s what I was discussing with @Morganh.

In your case, were you using the resnet18 pretrained model for training? There could be different preprocessing steps for different models…
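One common mismatch of this kind: classification models fine-tuned from an ImageNet-pretrained resnet18 in TLT typically expect caffe-style preprocessing (RGB→BGR channel order plus per-channel mean subtraction), not raw 0–255 RGB pixels. The sketch below is an assumption to verify against your training spec, not something confirmed in this thread; the mean values are the standard ImageNet ones:

```python
import numpy as np
from PIL import Image

# Standard ImageNet per-channel means (BGR order) used by caffe-style
# preprocessing; an assumption here -- verify against your training spec.
CAFFE_MEANS = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess_caffe(image, w=224, h=224):
    arr = np.asarray(image.convert("RGB").resize((w, h))).astype(np.float32)
    arr = arr[:, :, ::-1] - CAFFE_MEANS      # RGB -> BGR, subtract means
    return arr.transpose([2, 0, 1]).ravel()  # HWC -> CHW, flatten
```

If tlt-infer applies this internally while the standalone script feeds raw pixels, that alone could explain correct tlt-infer results and wrong TensorRT results from the same weights.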

I have used the resnet18 model as well. I have changed the input shape of the image since it was trained on (3, 224, 224). Not sure what is happening with the pre-processing. @jazeel.jk

How did you generate the etlt file and trt engine?
With fp16 or int8?

I have trt models built with fp32, fp16 and also int8…
In the code I shared above, I was inferring with the fp32 model…

@Morganh, for classification should we necessarily use int8? Can’t we use fp16 or fp32? In the Jupyter notebook for classification only int8 is mentioned… that’s why I’m asking…

All (fp32, fp16, int8) can be set.


Thanks @Morganh ,
And in the below lines,

h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)), dtype=trt.nptype(ModelData.DTYPE))
h_output = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(1)), dtype=trt.nptype(ModelData.DTYPE))

ModelData.DTYPE should be trt.float32 for fp32… and trt.int8 for int8 models, is that right?

For your fp32 trt engine, you can change it to np.float32 and try.
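A plain-NumPy stand-in (no pycuda or TensorRT needed) for the buffer allocation in the snippet above, with assumed binding shapes for a two-class resnet18 classifier; note that even an int8-calibrated engine usually keeps float32 input/output bindings, since the quantisation happens inside the engine:

```python
import numpy as np

# Hypothetical binding shapes for a resnet18 classifier engine
# (values assumed for illustration, not taken from this thread).
input_shape = (3, 224, 224)
output_shape = (2,)  # e.g. positive / negative

def volume(shape):
    """Number of elements in a binding, like trt.volume()."""
    return int(np.prod(shape))

# Host buffers sized like the pagelocked ones above. For an fp32 engine
# both are float32; for int8 engines the bindings are typically still
# float32, so querying trt.nptype(engine.get_binding_dtype(i)) per
# binding is safer than hard-coding trt.int8.
h_input = np.empty(volume(input_shape), dtype=np.float32)
h_output = np.empty(volume(output_shape), dtype=np.float32)
```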

Also, I suggest you run the trt engine with DeepStream to check if there is the same issue.

In TLT classification, tlt-infer does not provide inference against the trt engine.

Yes @Morganh ,
trt.float32 and np.float32 give the same output… When I integrate the classification model with DeepStream, it throws an error:

2021-02-09 13:23:40.634477: I tensorflow/stream_executor/platform/default/] Successfully opened dynamic library
[libprotobuf ERROR google/protobuf/] Error parsing text-format nvdsinferserver.config.PluginControl: 2:1: Extension "property" is not defined or is not an extension of "nvdsinferserver.config.PluginControl".
0:00:03.723046581 30527 0x55afedc22640 WARN           nvinferserver gstnvinferserver_impl.cpp:393:start:<primary_gie> error: Configuration file parsing failed
0:00:03.723059959 30527 0x55afedc22640 WARN           nvinferserver gstnvinferserver_impl.cpp:393:start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/classification_pepsi/config_classification.txt
0:00:03.723081007 30527 0x55afedc22640 WARN           nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<primary_gie> error: gstnvinferserver_impl start failed
** ERROR: <main:655>: Failed to set pipeline to PAUSED
ERROR from primary_gie: Configuration file parsing failed
Debug info: gstnvinferserver_impl.cpp(393): start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie:
Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/classification_pepsi/config_classification.txt
ERROR from primary_gie: gstnvinferserver_impl start failed
Debug info: gstnvinferserver.cpp(460): gst_nvinfer_server_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie
App run failed

For how to run inference with a classification model in DeepStream, refer to Issue with image classification tutorial and testing with deepstream-app - #12 by Morganh
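One reading of the log above: the protobuf message `Extension "property" is not defined ... nvdsinferserver.config.PluginControl` suggests an ini-style `[property]` config (the gst-nvinfer format) is being parsed by gst-nvinferserver, the Triton plugin, which expects protobuf text format instead. A hedged sketch of the gst-nvinfer style, with placeholder paths and keys to verify against the DeepStream documentation:

```ini
# Hedged sketch, not taken from this thread. An ini [property] group is
# the gst-nvinfer format; if deepstream-app is used, selecting nvinfer
# (plugin-type=0 under [primary-gie]) keeps this format valid.
[property]
gpu-id=0
tlt-encoded-model=<path to .etlt>
tlt-model-key=<your key>
labelfile-path=<path to labels.txt>
network-mode=0        # 0=fp32, 1=int8, 2=fp16
network-type=1        # 1 = classifier
```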

Hi @Morganh ,
Thank you so much for the help. I was able to successfully run the exported model with DeepStream. I have a video whose frames are all expected to belong to the positive class, but they are predicted as negative. All the frames were predicted as negative…

@Morganh ,
So both the fp32 and int8 etlt models are giving wrong outputs with DeepStream as well. Is there a way to run .tlt models with DeepStream?

tlt-infer classification works with the .tlt file… neither .trt nor .etlt is supported there by tlt-infer classification…

Can I run and test the same .tlt file with DeepStream?

That is not possible, I guess… we need to export it as a .etlt file to integrate it with DeepStream, don’t we?
So what could be going wrong? The .tlt file gives the right results with tlt-infer… the exported file of the same .tlt model gives wrong results with the python script and also with DeepStream…

In DeepStream, only the etlt file or trt engine can be deployed.


Yeah, got it… @Morganh, so what could be the reason the etlt model wrongly classifies images/frames with DeepStream and also with the python code?

Can you run tlt-infer against your training images instead of the test images?
Then run your standalone python script against the training images too.
31 images is rather few.

Also, what were the training and evaluation results previously?

Training and evaluation accuracy were above 95%…

Hi @Morganh,
I ran tlt-infer with the .tlt model against my training images… out of 1606 images, 1603 were classified as positive and 3 as negative. When I ran the python script with the .trt file of the same model against my training images, 1431 images were classified as negative and only 175 as positive…

Thanks for the info. I am checking further.
