One unavailable HTTP stream causes pipeline crash

• Hardware Platform: GPU
• DeepStream Version: 5.1 or 6.2
• TensorRT Version: 7.2.2 for DS 5.1 and 8.5.2.2 for DS 6.2
• NVIDIA GPU Driver Version: 535
• Issue Type: bug

If one of the live stream sources (an HTTP stream) goes down (the camera cannot be read), the whole DeepStream pipeline crashes. I would like the remaining sources, which can still be read, to continue being processed by DeepStream.
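What I am hoping for is roughly the behaviour sketched below (a hypothetical sketch, not working code: the bus calls are standard GStreamer, the "src_sub_bin" names are taken from the log further down, and a complete version would also have to release the matching nvstreammux request pad, as the runtime_source_add_delete reference app does). The idea is that an error from one HTTP source removes only that source bin instead of quitting the whole app:

/* Hypothetical per-source error handling; would be attached with
 *   gst_bus_add_watch (gst_pipeline_get_bus (GST_PIPELINE (pipeline)), bus_callback, ctx);
 */
#include <gst/gst.h>

typedef struct {
  GstElement *pipeline;
  GMainLoop  *loop;
} AppCtx;

static gboolean
bus_callback (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  AppCtx *ctx = (AppCtx *) user_data;

  if (GST_MESSAGE_TYPE (msg) == GST_MESSAGE_ERROR) {
    GError *err = NULL;
    gchar *dbg = NULL;
    gst_message_parse_error (msg, &err, &dbg);

    /* Walk up from the element that posted the error to its source sub-bin
     * (in the log these are named src_sub_bin0, src_sub_bin1, ...). */
    GstElement *src_bin = NULL;
    for (GstObject *o = GST_MESSAGE_SRC (msg); o != NULL; o = GST_OBJECT_PARENT (o)) {
      const gchar *name = GST_OBJECT_NAME (o);
      if (name && g_str_has_prefix (name, "src_sub_bin")) {
        src_bin = GST_ELEMENT (o);
        break;
      }
    }

    if (src_bin != NULL) {
      /* Drop only the failing source; everything else keeps running.
       * A real implementation would also release the corresponding
       * nvstreammux sink pad before removing the bin. */
      g_printerr ("Source %s failed (%s); removing it and continuing\n",
                  GST_OBJECT_NAME (src_bin), err->message);
      gst_element_set_state (src_bin, GST_STATE_NULL);
      gst_bin_remove (GST_BIN (GST_OBJECT_PARENT (src_bin)), src_bin);
    } else {
      /* Error not tied to a single source: stop as before. */
      g_printerr ("Fatal error: %s\n", err->message);
      g_main_loop_quit (ctx->loop);
    }

    g_clear_error (&err);
    g_free (dbg);
  }
  return TRUE; /* keep the bus watch installed */
}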

Here is the log when the app crashes:

(deepstream-app:1): GLib-GObject-WARNING **: 07:01:52.847: value "0" of type 'guint' is invalid or out of range for property 'gie-id-to-blur' of type 'guint'
Unknown or legacy key specified 'network-input-order' for group [property]
No protocol specified
0:00:01.891498265     1 0x563acb782990 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 2]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream/sources/apps/main/classifier/efficientnet.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT input           3x224x224
1   OUTPUT kFLOAT output          72
0:00:01.891607755     1 0x563acb782990 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<secondary_gie_0> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 2]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream/sources/apps/main/classifier/efficientnet.engine
0:00:01.893258894     1 0x563acb782990 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<secondary_gie_0> [UID 2]: Load new model:/opt/nvidia/deepstream/deepstream/sources/apps/main/configs/config_infer_sec.txt sucessfully
Deserialize yoloLayer plugin: yolo_140
Deserialize yoloLayer plugin: yolo_151
Deserialize yoloLayer plugin: yolo_162
0:00:02.516702147     1 0x563acb782990 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream/sources/apps/main/yolo/checkpoint/model_b2_gpu0_fp16.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 4
0   INPUT  kFLOAT data            3x256x256
1   OUTPUT kFLOAT yolo_140        18x32x32
2   OUTPUT kFLOAT yolo_151        18x16x16
3   OUTPUT kFLOAT yolo_162        18x8x8
0:00:02.516757168     1 0x563acb782990 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream/sources/apps/main/yolo/checkpoint/model_b2_gpu0_fp16.engine
0:00:02.518318668     1 0x563acb782990 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream/sources/apps/main/configs/config_infer_primary.txt sucessfully
Runtime commands:
	h: Print this help
	q: Quit
	p: Pause
	r: Resume
** INFO: <bus_callback:181>: Pipeline ready
(deepstream-app:1): GLib-GObject-WARNING **: 07:01:55.442: g_object_set_is_valid_property: object class 'GstNvJpegDec' has no property named 'DeepStream'
(deepstream-app:1): GLib-GObject-WARNING **: 07:01:55.442: g_object_set_is_valid_property: object class 'GstNvJpegDec' has no property named 'DeepStream'
Using GPU 0 (NVIDIA GeForce RTX 3050, 20 SMs, 1536 th/SM max, CC 8.6, ECC off)
Using GPU 0 (NVIDIA GeForce RTX 3050, 20 SMs, 1536 th/SM max, CC 8.6, ECC off)
** INFO: <bus_callback:167>: Pipeline running
0:00:03.898127911     1 0x563b1b5a30f0 ERROR         nvvideoconvert gstnvvideoconvert.c:3387:gst_nvvideoconvert_transform: buffer transform failed
ERROR from source: Internal data stream error.
Debug info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin1/GstURIDecodeBin:src_elem/GstSoupHTTPSrc:source:
streaming stopped, reason error (-5)
Quitting
App run failed

Here is my config.txt:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=30

[tiled-display]
enable=0
rows=1
columns=1
width=640
height=480
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=2
uri=http://10.23.37.136:8000/camera/mjpeg
camera-id=274
num-sources=1
gpu-id=0
cudadec-memtype=0
roi=738:117:370:361

[source1]
enable=1
type=2
uri=http://10.23.37.135:8000/camera/mjpeg
camera-id=275
num-sources=1
gpu-id=0
cudadec-memtype=0
roi=757:127:370:332

[source2]
enable=1
type=2
uri=http://10.23.37.132:8000/camera/mjpeg
camera-id=276
num-sources=1
gpu-id=0
cudadec-memtype=0
roi=805:114:366:336

[sink0]
enable=1
type=1
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=1

[sink1]
enable=1
type=1
sync=0
source-id=1
gpu-id=0
nvbuf-memory-type=1

[sink2]
enable=1
type=1
sync=0
source-id=2
gpu-id=0
nvbuf-memory-type=1

[osd]
enable=1
gpu-id=0
border-width=2
text-size=20
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=0
batch-size=3
batched-push-timeout=40000
width=416
height=416
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
batch-size=3
gie-unique-id=1
process-mode=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[secondary-gie0]
enable=1
process-mode=2
gpu-id=0
batch-size=1
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_sec.txt

[tests]
file-loop=0

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Have you tried the latest DeepStream 6.3 or DeepStream 6.4? They work well with multiple HTTP streams.

Please set the nvstreammux “batch-size” to the same value as the number of sources.
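For example, with the three enabled [sourceN] groups in the config posted above, the muxer batch size should be:

[streammux]
batch-size=3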

source2_test.txt (4.3 KB)

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.