All (fp32, fp16, int8) can be set.
Thanks @Morganh ,
And in the below lines,
h_input = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(0)), dtype=trt.nptype(ModelData.DTYPE))
h_output = cuda.pagelocked_empty(trt.volume(engine.get_binding_shape(1)), dtype=trt.nptype(ModelData.DTYPE))
ModelData.DTYPE should be trt.float32 for fp32 models… and trt.int8 for int8 models, shouldn't it?
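Rather than hardcoding ModelData.DTYPE, the host buffer dtype can be taken per binding from the engine itself via trt.nptype(engine.get_binding_dtype(i)). Note that even for an int8-calibrated engine, the input/output bindings are often still float32, which is why hardcoding trt.int8 can be wrong. A minimal sketch of the allocation logic, with np.empty standing in for cuda.pagelocked_empty so it runs without a GPU; the helper name is made up for illustration:

```python
import numpy as np

def alloc_host_buffers(binding_shapes, binding_dtypes):
    """Allocate one flat host buffer per engine binding.

    binding_shapes: list of shape tuples (from engine.get_binding_shape(i))
    binding_dtypes: list of NumPy dtypes (from trt.nptype(engine.get_binding_dtype(i)))

    With a real TensorRT engine you would call cuda.pagelocked_empty here;
    np.empty stands in so the sketch is self-contained.
    """
    buffers = []
    for shape, dtype in zip(binding_shapes, binding_dtypes):
        size = int(np.prod(shape))          # equivalent of trt.volume(shape)
        buffers.append(np.empty(size, dtype=dtype))
    return buffers
```

This keeps the host buffers consistent with whatever I/O precision the engine was actually built with, instead of assuming it from the engine's compute precision.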
For your fp32 trt engine, you can change it to np.float32 and try.
Also, I suggest you run the trt engine with DeepStream to check whether the same issue occurs.
TLT classification does not provide inference against the trt engine.
Yes @Morganh ,
trt.float32 and np.float32 give the same output… When I integrate the classification model with DeepStream, it throws an error:
2021-02-09 13:23:40.634477: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.2
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format nvdsinferserver.config.PluginControl: 2:1: Extension "property" is not defined or is not an extension of "nvdsinferserver.config.PluginControl".
0:00:03.723046581 30527 0x55afedc22640 WARN nvinferserver gstnvinferserver_impl.cpp:393:start:<primary_gie> error: Configuration file parsing failed
0:00:03.723059959 30527 0x55afedc22640 WARN nvinferserver gstnvinferserver_impl.cpp:393:start:<primary_gie> error: Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/classification_pepsi/config_classification.txt
0:00:03.723081007 30527 0x55afedc22640 WARN nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<primary_gie> error: gstnvinferserver_impl start failed
** ERROR: <main:655>: Failed to set pipeline to PAUSED
Quitting
ERROR from primary_gie: Configuration file parsing failed
Debug info: gstnvinferserver_impl.cpp(393): start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie: Config file path: /opt/nvidia/deepstream/deepstream-5.0/samples/classification_pepsi/config_classification.txt
ERROR from primary_gie: gstnvinferserver_impl start failed
Debug info: gstnvinferserver.cpp(460): gst_nvinfer_server_start (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInferServer:primary_gie
App run failed
For how to run inference with a classification model in DeepStream, refer to Issue with image classification tutorial and testing with deepstream-app - #12 by Morganh
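The protobuf parse error above ("Extension \"property\" is not defined or is not an extension of nvdsinferserver.config.PluginControl") usually means the pipeline element is gst-nvinferserver (the Triton plugin), while the config file is written in the INI-style gst-nvinfer format with a [property] group. One fix is to run the GIE with gst-nvinfer instead. A hedged sketch of what a gst-nvinfer classifier config might look like for a two-class etlt model; every file name, the key, and the threshold below are illustrative placeholders, not values from this thread:

```ini
[property]
gpu-id=0
# caffe-style preprocessing: BGR input, zero-centered by the mean pixel
net-scale-factor=1.0
offsets=103.939;116.779;123.68
model-color-format=1
# placeholder paths/key -- replace with your own
tlt-encoded-model=final_model.etlt
tlt-model-key=<your key>
labelfile-path=labels.txt
# network-mode: 0 = fp32, 1 = int8, 2 = fp16
network-mode=0
# network-type=1 marks this as a classifier
network-type=1
classifier-threshold=0.2
```

The offsets and model-color-format lines matter later in this thread: they are the nvinfer equivalent of the 'BGR' plus mean-pixel preprocessing fix.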
Hi @Morganh ,
Thank you so much for the help. I was able to run the exported model with DeepStream. I have a video in which the frames are expected to belong to the positive class, but they are predicted as negative… all the frames were predicted as negative…
So both fp32 and int8 etlt models are giving wrong outputs with DeepStream as well. Is there a way to run .tlt models with DeepStream?
tlt-infer classification works with the .tlt file… neither .trt nor .etlt is supported there…
Can I run and test the same .tlt file with DeepStream?
I guess that is not possible… I need to export it as an .etlt file to integrate it with DeepStream, don't I?
So what could be going wrong? The .tlt file gives right results with tlt-infer… the exported file of the same .tlt model gives wrong results with the python script and also with DeepStream…
In deepstream, only etlt file or trt engine can be deployed.
Yeah, got it… @Morganh, so what could be the reason the etlt model wrongly classifies images/frames with DeepStream and also with the python code?
Can you run tlt-infer against your training images instead of test images?
Then run your standalone python script against the training images too.
31 images is a bit less.
Also, what were the training and evaluation results previously?
Training and evaluation accuracy were above 95%…
I ran tlt-infer with the .tlt model against my training images… out of 1606 images, 1603 were classified as positive and 3 as negative. When I ran the python script with the .trt file of the same model against the training images, 1431 images were classified as negative and only 175 as positive…
Thanks for the info. I am checking further.
Please modify the preprocessing. It will fix the issue; the standalone inference result will then be the same as tlt-infer. The main changes are 'RGB' -> 'BGR' and zero-centering by the mean pixel.
from keras.applications.imagenet_utils import preprocess_input
Old:
return np.asarray(image.resize((w, h), Image.ANTIALIAS)).transpose([2, 0, 1]).astype(trt.nptype(trt.float32)).ravel()
New:
return preprocess_input(np.asarray(image.resize((w, h), Image.ANTIALIAS)).transpose([2, 0, 1]).astype(trt.nptype(trt.float32)), mode='caffe', data_format='channels_first').ravel()
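The 'caffe' mode of preprocess_input can also be written out in plain NumPy, which makes the two steps explicit and easy to test without Keras installed. This is a sketch of exactly the changes named above (RGB -> BGR, then zero-centering by the ImageNet mean pixel); the function name is made up, while the mean constants follow the Keras 'caffe' implementation:

```python
import numpy as np

# ImageNet mean pixel in BGR order, as used by Keras' mode='caffe'
CAFFE_MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def caffe_preprocess_chw(img_chw):
    """img_chw: float32 array of shape (3, H, W), channels in RGB order.

    Returns a flat float32 array, BGR and zero-centered by the mean pixel,
    matching preprocess_input(..., mode='caffe', data_format='channels_first').ravel().
    """
    bgr = img_chw[::-1, :, :].astype(np.float32)   # flip channel axis: RGB -> BGR
    bgr = bgr - CAFFE_MEAN_BGR[:, None, None]      # zero-center each channel
    return bgr.ravel()
```

Seeing the steps spelled out also explains the DeepStream side: the same effect is obtained there with a BGR model-color-format and per-channel mean offsets, rather than in Python code.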
Hi @Morganh ,
Thank you so much… It worked… The results are now the same with tlt-infer and the standalone script…
Hey, after running your script on my classification model I am always getting one class, and also 0 positive/negative values like this.
Yes, that script was made for testing my model… I had 2 classes, "positive" and "negative"… you can edit it with the names of your classes…
Thanks, got it working for my model. @jazeel.jk
Have you tried anything with which you can get the accuracy at the end, so you can check the total accuracy of your model across all the different classes?
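One way to do this: while looping over a labeled image set, collect the true and predicted labels, then tally overall and per-class accuracy at the end. A minimal sketch (the function name and label strings are illustrative, not from the scripts in this thread):

```python
from collections import defaultdict

def accuracy_report(y_true, y_pred):
    """Return (overall_accuracy, {class: accuracy}) from parallel label lists."""
    total = defaultdict(int)
    correct = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    per_class = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / max(len(y_true), 1)
    return overall, per_class
```

Per-class numbers are worth printing alongside the overall figure: a model that predicts one class for everything can still show a high overall accuracy on an imbalanced set.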