deepstream-yolo-app fails to run while loading TRT Engine

I managed to get the deepstream-yolo-app to compile, but when I run it, it stops with the following message:

Using previously generated plan file located at /mnt/deepstream_test/deepstream_reference_apps/yolo/samples/objectDetector_YoloV3/yolov3-kFLOAT-kGPU-batch1.engine
Loading TRT Engine...
deepstream-yolo-app: /mnt/deepstream_test/deepstream_reference_apps/yolo/lib/plugin_factory.cpp:41: virtual nvinfer1::IPlugin* PluginFactory::createPlugin(const char*, const void*, size_t): Assertion `m_LeakyReLULayers[m_LeakyReLUCount] == nullptr' failed.
Signal: SIGABRT (Aborted)

Process finished with exit code 1

I’ve tried digging through the code in “plugin_factory.cpp” but it didn’t help much. Any suggestions as to the root cause?

The TensorRT version used to generate the plan file should match the TensorRT version used when running deepstream-yolo-app.

Hi ChrisDing, thanks for replying.

I only have one version of TensorRT installed, version 5.0.2, and I also have CUDA 10.0 installed. I’m compiling and running on the same system.

If I remove the “yolov3-kFLOAT-kGPU-batch1.engine” file, the app regenerates it and then gives me the exact same error. So I’m not entirely sure what I’m missing.

Is this with a standard yolov2 or yolov3 network or a custom network?

It’s a YOLOv3 network with some minor tweaks to the layers (namely this: [url]https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov3_5l.cfg[/url]) and retrained on a custom dataset using the darknet53 weights.

I could be wrong, but in my understanding that should work just fine.

We have not tried the model that you have referenced. Can you check whether the standard models specified in the README work as expected? If yes, then the issue is due to the custom network. Since the error you are seeing originates in a plugin layer (leaky ReLU), you may want to try increasing the max leaky ReLU layer count from 72 to 86, doing a clean build of the plugin, installing it, and running the app again.

I say 86 because a quick look at your config indicates that there are 86 leaky ReLU layers in your network. If that number is different, then please change it accordingly. If the number of yolo layers is different, make that change as well.

Thank you for that insight. I’ll take a crack at that this weekend and report back. I appreciate the assistance!

Hi NvCJR, it looks like your suggestion was the way to go. I had to make a few additional adjustments in the nvyolo plugin (in plugin_factory.h) to conform to my model, but ultimately it works! I appreciate your assistance.
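For anyone hitting the same assertion, the adjustments are along these lines (a sketch only; the exact member names and original values may differ between versions of the deepstream_reference_apps yolo plugin, so check your own plugin_factory.h):

```
// plugin_factory.h -- raise the plugin-layer limits to match the custom
// network. 86 matches the leaky-ReLU count in yolov3_5l.cfg; adjust the
// yolo-layer count to your model as well (yolov3_5l has 5 yolo layers).
static const int m_MaxLeakyLayers = 86;  // was 72 for standard yolov3
static const int m_MaxYoloLayers = 5;    // was 3 for standard yolov3
```

The assertion in plugin_factory.cpp fires because the engine contains more leaky-ReLU plugin layers than the fixed-size bookkeeping arrays were declared to hold, so writes past the expected count trip the `== nullptr` check.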

One last question: does the “deepstream-yolo-app” sample app run inference on every frame, or does it run inference on one frame and track for the next x frames as the “deepstream-test” sample apps do?

Nevermind, I answered my own question by looking at the source code. Thank you again.