How to restart a new pipeline after the previous pipeline has quit in deepstream-app

Please provide complete information as applicable to your setup.

**• Hardware Platform (Jetson / GPU)** GPU
**• DeepStream Version** 5.1

Hi, here is a simple demo based on deepstream-app in which I want to create a new pipeline after the previous pipeline has been destroyed. However, the program crashes with the following backtrace after I press q on the keyboard to trigger the function that recreates the destroyed pipeline:

```
1 ?? 0x7fffcc692460
2 nvinfer1::PluginRegistrar::PluginRegistrar() 0x7fff5ddb7858
3 _Z41__static_initialization_and_destruction_0ii 0x7fff5ddb73a1
4 _GLOBAL__sub_I_yoloPlugins.cpp 0x7fff5ddb73d4
5 call_init dl-init.c 72 0x7ffff7de38d3
6 _dl_init dl-init.c 119 0x7ffff7de38d3
7 dl_open_worker dl-open.c 522 0x7ffff7de839f
8 __GI__dl_catch_exception dl-error-skeleton.c 196 0x7ffff48e11ef
9 _dl_open dl-open.c 605 0x7ffff7de796a
10 dlopen_doit dlopen.c 66 0x7ffff4374f96
11 __GI__dl_catch_exception dl-error-skeleton.c 196 0x7ffff48e11ef
12 __GI__dl_catch_error dl-error-skeleton.c 215 0x7ffff48e127f
13 _dlerror_run dlerror.c 162 0x7ffff4375745
14 __dlopen dlopen.c 87 0x7ffff4375051
15 nvdsinfer::DlLibHandle::DlLibHandle(std::string const&, int) 0x7fff8155b8fc
16 std::_MakeUniq<nvdsinfer::DlLibHandle>::__single_object std::make_unique<nvdsinfer::DlLibHandle, char (&) [4096], int>(char (&) [4096], int&&) 0x7fff8152bc2a
17 nvdsinfer::NvDsInferContextImpl::initialize(_NvDsInferContextInitParams&, void *, void ( *)(INvDsInferContext *, unsigned int, NvDsInferLogLevel, const char *, void *)) 0x7fff8151da2d
18 createNvDsInferContext(INvDsInferContext * *, _NvDsInferContextInitParams&, void *, void ( *)(INvDsInferContext *, unsigned int, NvDsInferLogLevel, const char *, void *)) 0x7fff8152398d
19 gst_nvinfer_start(_GstBaseTransform *) 0x7fff81c034bb
20 ?? 0x7fffca704270
21 ?? 0x7fffca704505
22 ?? 0x7ffff6b906ab
23 gst_pad_set_active 0x7ffff6b91126
24 ?? 0x7ffff6b6ef0d
25 gst_iterator_fold 0x7ffff6b81884
26 ?? 0x7ffff6b6fa16
27 ?? 0x7ffff6b7195e
28 ?? 0x7ffff6b71c8f
29 gst_element_change_state 0x7ffff6b73d5e
30 ?? 0x7ffff6b74499
31 ?? 0x7ffff6b51a02
32 gst_element_change_state 0x7ffff6b73d5e
33 ?? 0x7ffff6b74499
34 ?? 0x7ffff6b51a02
35 gst_element_change_state 0x7ffff6b73d5e
36 gst_element_change_state 0x7ffff6b74045
37 ?? 0x7ffff6b74499
38 start_new_pipeline deepstream_app_main.cpp 862 0x555555563af3
39 main deepstream_app_main.cpp 824 0x555555563965
```

I would like to know how to fix this bug and look forward to your reply.
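
For reference, the restart logic in start_new_pipeline boils down to roughly the following teardown/re-create cycle. This is only a simplified sketch with plain GStreamer calls and a placeholder pipeline description; the real code is in the attached deepstream_app_main.cpp:

```cpp
/* Simplified sketch of the tear-down / re-create cycle (illustrative only;
 * the real logic is in the attached deepstream_app_main.cpp). */
#include <gst/gst.h>

static GstElement *pipeline = NULL;

static void stop_pipeline (void)
{
  /* Bring the whole pipeline down to NULL and drop our reference. */
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  pipeline = NULL;
}

static void start_new_pipeline (void)
{
  /* Placeholder for the real bin construction from the DeepStream config.
   * The crash is raised while the new pipeline goes to PLAYING, inside
   * gst_nvinfer_start() -> createNvDsInferContext(). */
  pipeline = gst_parse_launch ("videotestsrc ! fakesink", NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);
}

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);
  start_new_pipeline ();   /* first pipeline: works */
  stop_pipeline ();        /* triggered by pressing q in the real app */
  start_new_pipeline ();   /* second pipeline: crashes when nvinfer is in the graph */
  return 0;
}
```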

Yuhao
deepstream_app_main.cpp (28.9 KB)

Any ideas? What I find very interesting is that the error does not always happen in my environment; sometimes the compiled program runs without crashing.

It looks like a bug in nvinfer: when I use an engine file that does not contain the custom TensorRT plugin layer, everything works without error. Please fix it @Fiona.Chen.
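
From the backtrace, the abort happens while nvinfer's DlLibHandle dlopen()s the custom plugin library again (frames 14-19) and the loader runs the static initialization of yoloPlugins.cpp (frames 2-6). Stripped of everything DeepStream-specific, the mechanism is roughly the following; the library path is only an example:

```cpp
/* Minimal illustration (outside DeepStream) of what the backtrace shows:
 * dlopen() of the custom plugin library runs its static initializers
 * (the nvinfer1::PluginRegistrar objects, typically created by
 * REGISTER_TENSORRT_PLUGIN, in yoloPlugins.cpp); after a dlclose(),
 * a second dlopen() runs them again.  The library path is only an example. */
#include <dlfcn.h>
#include <cstdio>

int main ()
{
  const char *lib = "./libnvdsinfer_custom_impl_Yolo.so";  /* example path */

  for (int i = 0; i < 2; ++i) {
    void *handle = dlopen (lib, RTLD_NOW);  /* static initializers run here */
    if (!handle) {
      std::fprintf (stderr, "dlopen failed: %s\n", dlerror ());
      return 1;
    }
    /* ... the plugin creators are now registered with TensorRT ... */
    dlclose (handle);                       /* unload before the next cycle */
  }
  return 0;
}
```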

Could you share a simple repro for us?

Yep, just use my attached deepstream_app_main.cpp to replace the original one in the source code, recompile, and run ./deepstream-app. Once the first pipeline is created, press q so that the start_new_pipeline function is triggered after the first pipeline has been destroyed. The program then crashes because of the nvinfer element.

Since you have already modified deepstream_app_main.cpp, can you share the diff file with us?

In addition, does it work well if you use the original deepstream-app without your change?

I can see and download my submitted diff file in the first reply on this page. As for the second question, my answer is yes, because the config and engine files are all the ones you provide as the YOLO reference.

For the YOLO network, the darknet -> onnx -> tensorrt path avoids the custom TensorRT plugin layer that the darknet -> tensorrt path requires, so this issue can be worked around for now. However, the darknet -> tensorrt path lets you define the network layers manually in TensorRT, so for some new networks it is still the only way to guarantee the correctness of the inference results. I therefore think this is a very serious issue and hope you can fix it as soon as possible.
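
For background, the nvinfer1::PluginRegistrar constructor at the top of the backtrace is what TensorRT's static plugin registration (REGISTER_TENSORRT_PLUGIN) creates, and it is exactly this registration that the darknet -> tensorrt path depends on. Below is a TensorRT-free mock of the pattern, only to show why that constructor runs again every time the custom library is loaded; the names are placeholders, not the real yoloPlugins.cpp code:

```cpp
/* TensorRT-free mock of the static registration pattern behind
 * REGISTER_TENSORRT_PLUGIN.  The macro places a namespace-scope static
 * nvinfer1::PluginRegistrar object in the plugin library, so its
 * constructor runs in _dl_init each time the shared object is loaded,
 * which is where the backtrace above aborts on the second pipeline start. */
#include <cstdio>

struct MockPluginRegistrar
{
  explicit MockPluginRegistrar (const char *name)
  {
    /* The real PluginRegistrar registers an IPluginCreator with the global
     * plugin registry here; running this twice for the same plugin is what
     * the second pipeline start appears to trip over. */
    std::printf ("registering plugin creator: %s\n", name);
  }
};

/* Placeholder equivalent of REGISTER_TENSORRT_PLUGIN(SomeYoloLayerCreator):
 * a static object constructed at library load time. */
static MockPluginRegistrar gYoloLayerRegistrar ("yolo-layer (placeholder)");
```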
