Failed to run app on Nano

Hi. Recently I wanted to run my app on the Nano, but I got some confusing errors. The detailed information is as follows:
ERROR: getPluginCreator could not find plugin FancyActivation version 001 namespace
ERROR: Cannot deserialize plugin FancyActivation
ERROR: getPluginCreator could not find plugin FancyActivation version 001 namespace
ERROR: Cannot deserialize plugin FancyActivation
ERROR: getPluginCreator could not find plugin FancyActivation version 001 namespace
ERROR: Cannot deserialize plugin FancyActivation
ERROR: getPluginCreator could not find plugin FancyActivation version 001 namespace
ERROR: Cannot deserialize plugin FancyActivation
ERROR: getPluginCreator could not find plugin FancyActivation version 001 namespace
ERROR: Cannot deserialize plugin FancyActivation
ERROR: getPluginCreator could not find plugin FancyActivation version 001 namespace
ERROR: Cannot deserialize plugin FancyActivation
ERROR: getPluginCreator could not find plugin ResizeNearest version 001 namespace
ERROR: Cannot deserialize plugin ResizeNearest
ERROR: getPluginCreator could not find plugin FancyActivation version 001 namespace
ERROR: Cannot deserialize plugin FancyActivation

I guess these errors may be caused by TensorRT, so I hope someone who has solved this problem can give me a solution. Thanks a lot.

Hi,

We will need more information to give further suggestions.

Based on the log you shared, there are some plugin implementations inside your model.
So please remember to add the corresponding plugin layers when deserializing the model.

PluginFactory pluginFactory;
ICudaEngine* engine = runtime->deserializeCudaEngine(trtModelStream->data(), trtModelStream->size(), &pluginFactory);
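As a general rule, whatever code registers the plugin creators must run before deserializeCudaEngine, so that getPluginCreator can find them. For plugins that ship with TensorRT itself, that is initLibNvInferPlugins() from NvInferPlugin.h; whether this covers FancyActivation and ResizeNearest depends on where they were registered when the engine was built, so treat the following as a sketch rather than a guaranteed fix (loadEngine and gLogger are placeholder names):

#include <fstream>
#include <iterator>
#include <vector>
#include "NvInfer.h"
#include "NvInferPlugin.h"   // initLibNvInferPlugins

// gLogger: your nvinfer1::ILogger implementation (placeholder name).
extern nvinfer1::ILogger& gLogger;

nvinfer1::ICudaEngine* loadEngine(const char* path)
{
    // Register TensorRT's built-in plugins in the global registry
    // so that getPluginCreator can find them during deserialization.
    initLibNvInferPlugins(&gLogger, "");

    // Read the serialized engine from disk.
    std::ifstream file(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    return runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}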

Thanks.

Hi, bro. Thank you for your answer. Actually, I didn't put any plugin layers inside my model. I just ported my app from Windows 10 to Ubuntu 18.04, so I don't know whether there is some difference between Windows 10 and Ubuntu when converting the TRT model. Could you please give some suggestions? Thanks a lot.

Hi,

A serialized TensorRT engine cannot be used across platforms, devices, or TensorRT versions.
Please regenerate the TensorRT engine from your model in the Linux environment.
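For reference, regenerating just means building the engine on the Nano itself and serializing it there. A minimal sketch of the serialization step, assuming you already have an ICudaEngine built on the target (saveEngine and the file name are placeholders):

#include <fstream>
#include "NvInfer.h"

// engine: an ICudaEngine that was built on the Nano itself.
void saveEngine(nvinfer1::ICudaEngine& engine, const char* path)
{
    // Serialize the engine to a host-memory buffer...
    nvinfer1::IHostMemory* serialized = engine.serialize();

    // ...and write it to disk for later deserialization on the same device.
    std::ofstream out(path, std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());
    serialized->destroy();
}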

Thanks.

Hi,
Do you mean I should reinstall TensorRT on Linux? I have already installed TensorRT on Linux, but I still use the yolo.weights file trained with Darknet on Windows. Should I retrain my model file on Linux?

Hi,

May I know how you run YOLO on Linux?
I suppose it is a Jetson platform, right?

Which source code do you use?
For Nano, we have a sample for deploying YOLO with TensorRT:
https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/yolo

Thanks.

Hi, bro. Thanks for your answer. I had been running my program on Windows before; it is a program for video analysis. Since I bought the Nano, I intend to port some functions of the program to it, so I recently compiled and ran the program on Linux. At the same time, I also converted the model I had trained on Windows. But since I have encountered this problem, I will take your advice and just use the official sample. Thanks a lot.

Hi,

For YOLO, it's recommended to check our DeepStream SDK first.
There is a sample demonstrating how to enable YOLOv2, YOLOv2_tiny, YOLOv3 and YOLOv3_tiny on Nano:
https://developer.nvidia.com/deepstream-sdk

$ cd /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo
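Inside that directory, the usual steps look roughly like this (assuming the DeepStream 4.0 sample layout and CUDA 10.0 on JetPack; please check the sample's README for the exact commands):

$ ./prebuild.sh                                        # download the YOLO cfg/weights files
$ export CUDA_VER=10.0                                 # CUDA version on the Nano (assumption)
$ make -C nvdsinfer_custom_impl_Yolo                   # build the custom YOLO parser library
$ deepstream-app -c deepstream_app_config_yoloV3.txt   # run the YOLOv3 pipeline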

Thanks.