How to analyze videos one by one

Hi, all.
How can we analyze videos one by one?
For our needs, we built a buffer pool and put videos into the pool continuously.

For a test, we wrapped the deepstream-app sample in a while loop, and in the second iteration we get a segmentation fault.

Would you mind sharing the code where you added your logic to deepstream-app?

Hi bcao.
In our test, only the YOLO model causes the crash.

Our test code is below:

//===============add 20200113=========start
while(1)
{
	return_value = 0;
//===============add 20200113=========end
	for (i = 0; i < num_instances; i++) {
		appCtx[i] = g_malloc0 (sizeof (AppCtx));
		appCtx[i]->person_class_id = -1;
		appCtx[i]->car_class_id = -1;
		appCtx[i]->index = i;
		if (show_bbox_text) {
			appCtx[i]->show_bbox_text = TRUE;
		}

		if (input_files && input_files[i]) {
			appCtx[i]->config.multi_source_config[0].uri =
			g_strdup_printf ("file://%s", input_files[i]);
			g_free (input_files[i]);
		}

		if (!parse_config_file (&appCtx[i]->config, cfg_files[i])) {
			NVGSTDS_ERR_MSG_V ("Failed to parse config file '%s'", cfg_files[i]);
			appCtx[i]->return_value = -1;
			goto done;
		}
	}
	.....
	done:
	g_print ("Quitting\n");
	for (i = 0; i < num_instances; i++) {
		if (appCtx[i]->return_value == -1)
			return_value = -1;
		destroy_pipeline (appCtx[i]);

		g_mutex_lock (&disp_lock);
		if (windows[i])
			XDestroyWindow (display, windows[i]);
		windows[i] = 0;
		g_mutex_unlock (&disp_lock);
		g_free (appCtx[i]);
	}
//===============add 20200113=========start
}
//===============add 20200113=========end

Our config file is:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
#gie-kitti-output-dir=streamscl

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
uri=file://mp4-video-path
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=1
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=yolov3_model_config_file.txt

With the command "gdb deepstream-app core", we got the error message:

#0  0x00007fab54447750 in  () at /home/zalend/TensorRT-5.1.5.0/targets/x86_64-linux-gnu/lib/libnvinfer.so.5
#1  0x00007fab04da3ec8 in nvinfer1::PluginRegistrar<YoloLayerV3PluginCreator>::PluginRegistrar() ()
    at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvdsinfer_custom_impl_Yolo.so
#2  0x00007fab04da3a11 in _Z41__static_initialization_and_destruction_0ii ()
    at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvdsinfer_custom_impl_Yolo.so
#3  0x00007fab04da3a44 in _GLOBAL__sub_I_yoloPlugins.cpp () at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvdsinfer_custom_impl_Yolo.so
#4  0x00007fab5edeb733 in call_init (env=0x556f825c6360, argv=0x7ffe501903f8, argc=3, l=<optimized out>) at dl-init.c:72
#5  0x00007fab5edeb733 in _dl_init (main_map=main_map@entry=0x556faa08d2e0, argc=3, argv=0x7ffe501903f8, env=0x556f825c6360) at dl-init.c:119
#6  0x00007fab5edf01ff in dl_open_worker (a=a@entry=0x7ffe5018f380) at dl-open.c:522
#7  0x00007fab5d0d62df in __GI__dl_catch_exception (exception=exception@entry=0x7ffe5018f360, operate=operate@entry=0x7fab5edefdc0 <dl_open_worker>, args=args@entry=0x7ffe5018f380) at dl-error-skeleton.c:196
#8  0x00007fab5edef7ca in _dl_open (file=0x556fa966d874 "/opt/nvidia/deepstream/deepstream-4.0/lib/libnvdsinfer_custom_impl_Yolo.so", mode=-2147483647, caller_dlopen=0x7fab0fdb0555 <NvDsInferContextImpl::initialize(_NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, char const*, void*))+3649>, nsid=<optimized out>, argc=3, argv=<optimized out>, env=0x556f825c6360)
    at dl-open.c:605
#9  0x00007fab540d5f96 in dlopen_doit (a=a@entry=0x7ffe5018f5b0) at dlopen.c:66
#10 0x00007fab5d0d62df in __GI__dl_catch_exception (exception=exception@entry=0x7ffe5018f550, operate=operate@entry=0x7fab540d5f40 <dlopen_doit>, args=args@entry=0x7ffe5018f5b0) at dl-error-skeleton.c:196
#11 0x00007fab5d0d636f in __GI__dl_catch_error (objname=objname@entry=0x556f820f8f20, errstring=errstring@entry=0x556f820f8f28, mallocedp=mallocedp@entry=0x556f820f8f18, operate=operate@entry=0x7fab540d5f40 <dlopen_doit>, args=args@entry=0x7ffe5018f5b0) at dl-error-skeleton.c:215
#12 0x00007fab540d6735 in _dlerror_run (operate=operate@entry=0x7fab540d5f40 <dlopen_doit>, args=args@entry=0x7ffe5018f5b0) at dlerror.c:162
#13 0x00007fab540d6051 in __dlopen (file=<optimized out>, mode=<optimized out>) at dlopen.c:87
#14 0x00007fab0fdb0555 in NvDsInferContextImpl::initialize(_NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, char const*, void*)) () at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_infer.so
#15 0x00007fab0fdbb9c7 in createNvDsInferContext(INvDsInferContext**, _NvDsInferContextInitParams&, void*, void (*)(INvDsInferContext*, unsigned int, NvDsInferLogLevel, char const*, char const*, void*)) () at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_infer.so
#16 0x00007fab2029f74f in gst_nvinfer_start(_GstBaseTransform*) () at /usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_infer.so
#17 0x00007fab53c8d270 in  () at /usr/lib/x86_64-linux-gnu/libgstbase-1.0.so.0
#18 0x00007fab53c8d505 in  () at /usr/lib/x86_64-linux-gnu/libgstbase-1.0.so.0
#19 0x00007fab5dc7d69b in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#20 0x00007fab5dc7e116 in gst_pad_set_active () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#21 0x00007fab5dc5bf0d in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#22 0x00007fab5dc6e874 in gst_iterator_fold () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#23 0x00007fab5dc5ca16 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#24 0x00007fab5dc5e95e in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#25 0x00007fab5dc5ec8f in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#26 0x00007fab5dc60d5e in gst_element_change_state () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#27 0x00007fab5dc61499 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#28 0x00007fab5dc3ea02 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#29 0x00007fab5dc60d5e in gst_element_change_state () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#30 0x00007fab5dc61499 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#31 0x00007fab5dc3ea02 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#32 0x00007fab5dc60d5e in gst_element_change_state () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#33 0x00007fab5dc61045 in gst_element_change_state () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#34 0x00007fab5dc61499 in  () at /usr/lib/x86_64-linux-gnu/libgstreamer-1.0.so.0
#35 0x0000556f8058cd5a in main (argc=1, argv=0x7ffe501903f8) at deepstream_app_main.c:

Hi bcao.
How do we solve the problem?

We’re checking internally and will give you a response ASAP.
BTW, I don’t understand “In our needs, we build a buffer pool, put videos into the pool at all times.” What is the buffer pool?

Hi bcao.
It is just a queue.
We put videos into the queue at all times in one thread and check the queue in the main thread.
Plan 1:
If the queue is not empty, we create a pipeline, and then destroy it with gst_object_unref when we get the EOS message.
Plan 2:
If the queue is not empty, we change the uri property of the ‘uridecodebin’ plugin when we get the EOS message, and then set the pipeline back to playing with “gst_element_set_state(appCtx->pipeline.pipeline, GST_STATE_PLAYING);”.

For plan 2, we changed the code of the deepstream-app sample as below for a test, and got a deadlock.

case GST_MESSAGE_EOS:
            {
                /*
                 * In normal scenario, this would use g_main_loop_quit() to exit the
                 * loop and release the resources. Since this application might be
                 * running multiple pipelines through configuration files, it should wait
                 * till all pipelines are done.
                 */
                NVGSTDS_INFO_MSG_V("Received EOS. Exiting ...\n");
#if 0
                appCtx->quit = TRUE;
                return FALSE;
#else
				gst_element_set_state(appCtx->pipeline.pipeline, GST_STATE_NULL);
				gst_element_set_state(appCtx->pipeline.pipeline, GST_STATE_PLAYING);
#endif
                break;
            }

Are you looking for dynamic source switching functionality? I would suggest you refer to https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/master/runtime_source_add_delete

Hi,
As you said above, there are two threads: “We put videos into the queue at all times in a thread, check the queue in the main thread.”

Regarding the thread that “puts videos into the queue at all times”: will this thread save the video into a local file? From the code you shared above, the other thread is reading from a local file.
Regarding “check the queue in the main thread”: is it just DeepStream code which reads the local file generated by the first thread and does some detection/classification with YOLOv3?

So, I have the following further questions:

  1. Why don’t you feed the input video into DeepStream directly?
  2. Keeping your current implementation, if you use the DeepStream YOLOv3 sample to read the local file and do some detection/classification, will you still hit the failure you reported above?

Thanks!

Hi bcao.
The example is for ‘nvmultistreamtiler’; can you provide some examples with ‘nvstreamdemux’?

When I used the code below, I always got the error message ‘Failed to get the request pad’:

g_snprintf(pad_name, 15, "src_%u", source_id);
srcpad = gst_element_get_request_pad(streamdemux, pad_name);
if (srcpad == NULL)
{
	g_printerr("Failed to get the request pad\n");
	return FALSE;
}
gst_object_unref(srcpad);

Hi mchi.
For the test code, the input video was fed into DeepStream directly.
We just added a while loop to the deepstream-app sample and changed the model to YOLOv3. The failure is always hit in the second loop iteration.

Regarding “for the test code, the input video was fed into deepstream directly; we just add a while loop to the deepstream-app sample, change the model to yolov3”:
If you feed the video into the DeepStream application directly, why do you add a while loop to the sample?

Hi mchi.
We have thousands of videos to analyze, and we want to save the result of each video independently.
We are looking for dynamic source switching functionality with the ‘nvstreammux’ and ‘nvstreamdemux’ plugins.

Hi 2251482984,

Have you managed to get the requirement implemented? Are there any results you can share?

Thanks