Segmentation fault (core dumped) while using DeepStream with our own model

Hi,
I'm using my own detection model with deepstream-app on a TX2. With a single source, deepstream-app runs successfully, but with two sources it runs for only a few frames and then aborts with “Segmentation fault (core dumped).” Does anyone know how to solve this? Thanks.

Hi,

Would you mind sharing your configuration files with us so we can check further?

Thanks.

Hi AastaLLL,
Here are my configuration files.
This is my deepstream-app configuration file.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=2
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
uri=file://../../samples/streams/sample_1080p_h264.mp4
gpu-id=0
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://../../samples/streams/sample_720p.mp4
num-sources=1
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[streammux]
gpu-id=0
batch-size=1
batched-push-timeout=-1
## Set muxer output width and height
width=1920
height=1080
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=1
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[primary-gie]
enable=1
gpu-id=0
batch-size=1
gie-unique-id=1
interval=0
labelfile-path=ssd_coco_labels.txt
model-engine-file=sample_centernet_1149.uff_b1_fp32.engine
config-file=config_infer_primary_centernet.txt
nvbuf-memory-type=0
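
For reference, the two-source sample configs that ship with DeepStream set the muxer batch size to the number of sources, whereas I left both [streammux] and [primary-gie] batch-size at 1. A rough sketch of what the sample-style settings would look like (not my current settings, batch size of 2 assumed for two enabled sources):

[streammux]
gpu-id=0
## set to the number of enabled sources (two here)
batch-size=2
width=1920
height=1080
nvbuf-memory-type=0

[primary-gie]
## typically kept in line with the muxer batch size
batch-size=2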

And the following is my primary inference configuration file.

[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
model-engine-file=sample_centernet_1149.uff_b1_fp32.engine
labelfile-path=ssd_coco_labels.txt
uff-file=sample_centernet_1149.uff
uff-input-dims=3;384;384;0
uff-input-blob-name=img_tensor
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=5
interval=0
gie-unique-id=1
is-classifier=0
output-blob-names=cdet_inference_model/head_heatmap_out/Sigmoid;cdet_inference_model/head_heatmap_maxpooling_out/MaxPool;cdet_inference_model/head_offset_out/Conv2D;cdet_inference_model/head_size_out/Conv2D
parse-bbox-func-name=NvDsInferParseCustomCenterNet
custom-lib-path=nvdsinfer_custom_impl_centernet/libnvdsinfer_custom_impl_centernet.so

[class-attrs-all]
threshold=0.5
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

## Per class configuration
#[class-attrs-2]
#threshold=0.6
#roi-top-offset=20
#roi-bottom-offset=10
#detected-min-w=40
#detected-min-h=40
#detected-max-w=400
#detected-max-h=800

Thank you so much.

Hi,

We will try to reproduce this issue in our environment.
Could you help check whether this issue also occurs with our official model when using two input sources?
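
For example, you can point the [primary-gie] section back at the stock detector config that ships in samples/configs/deepstream-app, keeping the rest of your app config unchanged. A rough sketch (paths assume the default samples layout):

[primary-gie]
enable=1
gpu-id=0
batch-size=1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
# stock primary detector (resnet10) config from the samples; the TensorRT
# engine is built from the model it references on first run
config-file=config_infer_primary.txt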

Thanks.

Hi AastaLLL,

I’ve tried your official model with two input sources, and the issue didn’t occur.

Thanks.