YOLO inference library malfunction restarting pipeline

• Hardware Platform: TX2
• DeepStream Version: 5.0 GA
• TensorRT Version: 7.0	
• JetPack 4.5.1

Hi,
we are using an inference pipeline built from the components below, which are also used in “deepstream-app”:

inputComponent -> nvstreammux -> primaryGieBin (nvInfer with yoloPlugin) -> nvTrackerBin -> nvStreamDemux  -> OutputComponents

Here is what happens:
-) we start the above pipeline for the first time and inject a video stream
-) we get the metadata of the detected objects and can use them as needed
-) so the pipeline components work correctly and everything is fine
-) then we destroy the inference pipeline and restart the same pipeline again (sketched below)
-) but during pipeline startup we get the backtrace below
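
For clarity, this is roughly the start/destroy/restart sequence we use. The launch string is only a placeholder (element properties, file names and the tracker library are illustrative); the real pipeline is built programmatically, as in deepstream-app:

#include <gst/gst.h>

// Placeholder pipeline description, standing in for our programmatic setup:
// inputComponent -> nvstreammux -> nvinfer (YOLO) -> nvtracker -> nvstreamdemux -> outputs.
static GstElement *build_inference_pipeline(void)
{
    return gst_parse_launch(
        "filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 "
        "nvstreammux name=m batch-size=1 width=1280 height=720 ! "
        "nvinfer config-file-path=config_infer_primary_yoloV3_tiny.txt ! "
        "nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so ! "
        "nvstreamdemux name=d d.src_0 ! fakesink",
        NULL);
}

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    for (int run = 0; run < 2; ++run) {
        GstElement *pipeline = build_inference_pipeline();

        gst_element_set_state(pipeline, GST_STATE_PLAYING);
        /* ... stream runs, object metadata is read via pad probes ... */

        // Destroy the pipeline completely before restarting.
        gst_element_set_state(pipeline, GST_STATE_NULL);
        gst_object_unref(pipeline);
        // On the second iteration libnvdsinfer_custom_impl_Yolo.so is loaded
        // again and the crash occurs in its static initializers (backtrace below).
    }
    return 0;
}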

While looking for a solution, we found that a very similar situation was reported at the following link:

https://forums.developer.nvidia.com/t/yolo-inference-library-issue/154891

So in the file yoloPlugins.cpp there is this macro:

REGISTER_TENSORRT_PLUGIN(YoloLayerV3PluginCreator);

and it seems there is a static variable inside it that could be the root cause of the crash below, but it is not clear what solution was actually proposed in that thread.
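
For reference, in the TensorRT headers this macro expands, approximately, to a file-scope static object whose constructor registers the plugin creator in the global plugin registry, so it runs every time the shared library's static initializers run, i.e. on every load of libnvdsinfer_custom_impl_Yolo.so (simplified sketch, not the exact header text):

// Approximate, simplified expansion of
// REGISTER_TENSORRT_PLUGIN(YoloLayerV3PluginCreator):
static nvinfer1::PluginRegistrar<YoloLayerV3PluginCreator>
    pluginRegistrarYoloLayerV3PluginCreator{};

// The PluginRegistrar constructor effectively calls
//   getPluginRegistry()->registerCreator(instance, "");
// which matches frames #1/#0 of the backtrace below.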

Could you suggest the correct way to run the same inference pipeline multiple times and avoid the crash below?
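
For what it is worth, one workaround we could try (only a sketch, assuming the crash is caused by the custom library being unloaded and re-loaded between pipeline runs; not a confirmed fix from the linked thread) is to dlopen() the custom library once at application startup with RTLD_NODELETE, so it stays resident and its static plugin registration runs only once per process:

#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // RTLD_NODELETE is a GNU extension in glibc's dlfcn.h
#endif
#include <dlfcn.h>
#include <stdio.h>

// Keep libnvdsinfer_custom_impl_Yolo.so resident for the whole process
// lifetime; later dlopen()/dlclose() calls made by nvinfer then reuse the
// already-loaded library instead of re-running its static initializers.
static void *preload_yolo_custom_lib(void)
{
    void *handle = dlopen(
        "/opt/crs/zTest_AS/inferNetwork__Tiny_face_det/"
        "nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so",
        RTLD_NOW | RTLD_GLOBAL | RTLD_NODELETE);
    if (handle == NULL)
        fprintf(stderr, "preload failed: %s\n", dlerror());
    return handle;  // intentionally never dlclose()d
}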

thanks for the support,
M.

Thread 1 "AnalyticService" received signal SIGSEGV, Segmentation fault.
0x0000007f831a6f94 in ?? () from /usr/lib/aarch64-linux-gnu/libnvinfer.so.7
(gdb) bt
#0  0x0000007f831a6f94 in ?? () from /usr/lib/aarch64-linux-gnu/libnvinfer.so.7
#1  0x0000007f78672a50 in nvinfer1::PluginRegistrar<YoloLayerV3PluginCreator>::PluginRegistrar() () from /opt/crs/zTest_AS/inferNetwork__Tiny_face_det/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
#2  0x0000007f786724f0 in __static_initialization_and_destruction_0 ()   from /opt/crs/zTest_AS/inferNetwork__Tiny_face_det/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
#3  0x0000007f7867252c in _GLOBAL__sub_I_yoloPlugins.cpp () from /opt/crs/zTest_AS/inferNetwork__Tiny_face_det/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
#4  0x0000007fb7fdea34 in call_init (l=<optimized out>, argc=argc@entry=1, argv=argv@entry=0x7ffffff3b8, env=env@entry=0x55555cdce0) at dl-init.c:72
#5  0x0000007fb7fdeb38 in call_init (env=0x55555cdce0, argv=0x7ffffff3b8, argc=1, l=<optimized out>) at dl-init.c:118
#6  _dl_init (main_map=main_map@entry=0x557b8974e0, argc=1, argv=0x7ffffff3b8,    env=0x55555cdce0) at dl-init.c:119
#7  0x0000007fb7fe2cd8 in dl_open_worker (a=0x7fffffce88) at dl-open.c:522
#8  0x0000007fb7c71694 in __GI__dl_catch_exception (exception=0xfffffffffffffffe, operate=0x7fffffccac, args=0x7fffffce70) at dl-error-skeleton.c:196
#9  0x0000007fb7fe2418 in _dl_open (  work__Tiny_face_det/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so", mode=-2147483647, caller_dlopen=0x7faa1e97e4, nsid=-2, argc=1, argv=0x7ffffff3b8, env=<optimized out>) at dl-open.c:605
#10 0x0000007fb7a85014 in dlopen_doit (a=0x7fffffd148) at dlopen.c:66
#11 0x0000007fb7c71694 in __GI__dl_catch_exception (exception=0x7fb7ffe7a8 <__stack_chk_guard>, exception@entry=0x7fffffd0e0, operate=0x7fffffcf3c, args=0x7fffffd0c0) at dl-error-skeleton.c:196
#12 0x0000007fb7c71738 in __GI__dl_catch_error (objname=0x5555706100, errstring=0x5555706108,  mallocedp=0x55557060f8, operate=<optimized out>, args=<optimized out>) at dl-error-skeleton.c:215
#13 0x0000007fb7a86780 in _dlerror_run (operate=operate@entry=0x7fb7a84fb0 <dlopen_doit>, args=0x7fffffd148, args@entry=0x7fffffd158) at dlerror.c:162
#14 0x0000007fb7a850e8 in __dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:87
#15 0x0000007faa1e97e4 in ?? () from /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_infer.so
#16 0x0000007faa1d6640 in ?? () from /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_infer.so
#17 0x0000007faa1d70a0 in createNvDsInferContext(INvDsInferContext**, _NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, void*)) ()   from /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_infer.so
#18 0x0000007faa5599c4 in ?? ()  from /usr/lib/aarch64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
#19 0x0000007fb7969224 in ?? () from /usr/lib/aarch64-linux-gnu/libgstbase-1.0.so.0
#20 0x000000557d8c30d0 in ?? ()

Where is your yolov3 model?

The “yolov3 model” for us is the engine file “model_b2_gpu0_fp16.engine”, built starting from the model files yolov3-tiny_face.cfg and yolov3-tiny_face.weights.

Both the .engine file and the YOLOv3 model files are inside the directory specified in our variables and in the config file, so in this case “./inferNetwork__Tiny_face_det/”, i.e.
/opt/crs/zTest_AS/inferNetwork__Tiny_face_det/
and the test application is run from /opt/crs/zTest_AS/
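
For reference, the relevant part of the nvinfer config looks roughly like this (the key names are the standard ones from the objectDetector_Yolo sample; the paths and function names shown here are illustrative, written relative to /opt/crs/zTest_AS/ as described above):

[property]
custom-network-config=inferNetwork__Tiny_face_det/yolov3-tiny_face.cfg
model-file=inferNetwork__Tiny_face_det/yolov3-tiny_face.weights
model-engine-file=inferNetwork__Tiny_face_det/model_b2_gpu0_fp16.engine
custom-lib-path=inferNetwork__Tiny_face_det/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseCustomYoloV3Tiny
engine-create-func-name=NvDsInferYoloCudaEngineGet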

But note that the wrong behavior occurs only the second time we start the analytics pipeline.
So the location of the yolov3 .engine or model files should not play a role, because everything works correctly the first time.

Hi @mgalimberti,
This looks the same as YOLO inference library issue - #7 by geralt_of_rivia
