Testing custom model fails with DeepStream Python apps

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson TX2
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.1.3
• Issue Type: question

I was trying to test my custom model (UNet with a ResNet18 backbone, trained with TAO) with the "deepstream-segmentation" sample app it provides, but the following error occurred.

Since I have already converted my .etlt file to an engine file, my config file is set like this:
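
Roughly, the relevant [property] group looks like the sketch below; the scale factor, dims, and file names are assumptions for a TAO UNet on the ISBI data, not values I have verified:

    [property]
    gpu-id=0
    # preprocessing scale; 1/255 is the sample's default, TAO models may differ
    net-scale-factor=0.00392156862745098
    # 2 = GRAY, assuming single-channel ISBI input
    model-color-format=2
    # engine built from the .etlt with tao-converter
    model-engine-file=model_isbi.engine
    # C;H;W of the network input (assumed 1x512x512)
    infer-dims=1;512;512
    batch-size=1
    # 2 = segmentation network
    network-type=2
    num-detected-classes=2
    gie-unique-id=1
    segmentation-threshold=0.0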

Can anyone give me a hint on how to demo my model with this GitHub repo (maybe with another app, like deepstream-test1?), or on how to solve the error? Thanks.

You can find the error below:

I did notice that error, but I can't figure out where the problem is, since it says:

[screenshot of the error message]

I did provide the engine file; this is my folder:
https://drive.google.com/drive/folders/1jUqpP3cTxEVrcfrzmdbJYemWhT6RZiIW?usp=sharing

Or do I need the .uff file to run this "deepstream-segmentation" app?

Please check the model path configured in the PGIE config: is the model actually present at the path you configured?
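
For example, from the folder you launch the app in (the file names below are assumptions based on your description), a relative path in the config resolves against that working directory:

    cd ~/deepstream-segmentation        # wherever you run the Python app from
    grep model-engine-file dstest_segmentation_config_industrial.txt
    ls -l model_isbi.engine             # must exist at exactly the path grep printed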

I did put my engine file and the Python file in the same folder; is that what you mean? Or do I need to change

[screenshot of the model-engine-file line in the config]

this path to "model-engine-file=./model_isbi.engine"?
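
In other words, should the line be one of these (the absolute path is just a hypothetical example of my folder layout)?

    model-engine-file=./model_isbi.engine
    # or an absolute path, which does not depend on the working directory:
    model-engine-file=/home/nvidia/deepstream-segmentation/model_isbi.engine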

Can you try changing "batch-size=1" to "batch-size=3" in dstest_segmentation_config_industrial.txt?
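
That is, in the [property] group of that file; the engine's max batch size generally has to cover whatever batch-size nvinfer is asked to run:

    [property]
    # other keys unchanged
    batch-size=3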

I asked for help at our country's NVIDIA branch office; they said the problem occurred because a wrong parameter was set when converting the model. But after we changed it to the right parameter, the TX2's memory wasn't enough, so I'm currently working on that. Thank you for your help; this problem is now solved.
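
For anyone who hits the same thing: the parameter in question is set when running tao-converter. A rough sketch of such a command is below; the key, dims, and output node name are placeholders, not my exact values. Lowering -w and -m is one way to fit the build into the TX2's memory:

    # Sketch only: key, dims, and output node are placeholders, not my real values
    # -d : input dims C,H,W of the trained UNet (assumed 1,512,512 here)
    # -o : output node name; softmax_1 is typical for TAO UNet (assumption)
    # -m : max batch size baked into the engine (keep nvinfer batch-size <= this)
    # -w : TensorRT workspace in bytes; lower it if the TX2 runs out of memory
    tao-converter \
        -k <tao_encryption_key> \
        -d 1,512,512 \
        -o softmax_1 \
        -m 3 \
        -t fp16 \
        -w 1000000000 \
        -e model_isbi.engine \
        model_isbi.etlt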
