How to use nvstreamdemux and nvstreammux to process multiple live videos input and output

Jetson Xavier NX devkit
DeepStream 5.0
JetPack 4.4
L4T 32.4.2
I’m working on a pipeline that processes a few live video streams and outputs each of them via an RTSP server.
For now I have a simple test based on deepstream-test3 (Python): I added an nvstreamdemux after nvdsosd and created two nvegltransform + nveglglessink branches. The pipeline looks like:

uridecodebin1 \                                                      / nvegltransform1 -> nveglglessink1
               nvstreammux -> nvinfer -> ... -> nvdsosd -> nvstreamdemux
uridecodebin2 /                                                      \ nvegltransform2 -> nveglglessink2
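Roughly, the demux side is built like this (a minimal sketch of my test code; `pipeline` and `demux` already exist, and the element/variable names are illustrative):

```python
# Minimal sketch of the nvstreamdemux branches (names are illustrative).
# nvstreamdemux exposes one *request* pad per stream, named "src_%u";
# the index must match the "sink_%u" pad requested on nvstreammux for
# the same source, and the pads should be requested before PLAYING.

def demux_pad_name(stream_index):
    # e.g. 0 -> "src_0", 1 -> "src_1"
    return "src_%d" % stream_index

def link_demux_branch(pipeline, demux, stream_index):
    # Needs PyGObject + the DeepStream plugins on the device; imported
    # here so demux_pad_name stays importable without them.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    transform = Gst.ElementFactory.make("nvegltransform", "transform-%d" % stream_index)
    sink = Gst.ElementFactory.make("nveglglessink", "sink-%d" % stream_index)
    pipeline.add(transform)
    pipeline.add(sink)

    # Request the demux src pad for this stream and link the branch.
    srcpad = demux.get_request_pad(demux_pad_name(stream_index))
    srcpad.link(transform.get_static_pad("sink"))
    transform.link(sink)
```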

and I got:

Decodebin child added: source
Decodebin child added: source
Decodebin child added: decodebin0
Decodebin child added: decodebin1
Decodebin child added: rtppcmadepay0
Decodebin child added: rtph264depay0
Decodebin child added: alawdec0
Decodebin child added: h264parse0
In cb_newpad
gstname= audio/x-raw
Decodebin child added: capsfilter0
Decodebin child added: nvv4l2decoder0
Seting bufapi_version
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
In cb_newpad
gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f9eeae048 (GstCapsFeatures at 0x7edc08da00)>
reference in DPB was never decoded
Frame Number= 0 Number of Objects= 1 Vehicle_count= 0 Person_count= 1
Segmentation fault (core dumped)

Also, how can I debug my pipeline when using the DeepStream Python apps?
Is there a method like gst_debug_bin_to_dot_file()?
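From what I can tell, the same call seems to be exposed through the PyGObject bindings. Here is the sketch I’m planning to try (assuming a setup like the DeepStream Python apps; the `pipeline` argument and the `/tmp` output directory are my choices):

```python
import os

# GStreamer writes .dot files only if GST_DEBUG_DUMP_DOT_DIR is set when
# Gst.init() runs, e.g.:  GST_DEBUG_DUMP_DOT_DIR=/tmp python3 app.py
DOT_DIR = "/tmp"

def dot_file_path(name):
    # Where Gst.debug_bin_to_dot_file(..., name) will write its graph.
    return os.path.join(DOT_DIR, name + ".dot")

def dump_pipeline_graph(pipeline, name="pipeline"):
    # Needs PyGObject/GStreamer (present on JetPack); imported here so
    # the path helper above stays importable without them.
    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    # Python equivalent of gst_debug_bin_to_dot_file() in C.
    Gst.debug_bin_to_dot_file(pipeline, Gst.DebugGraphDetails.ALL, name)
```

The resulting graph can then be rendered with Graphviz, e.g. `dot -Tpng /tmp/pipeline.dot -o pipeline.png`, which should show whether the demux pads are actually linked.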

Seems this is duplicated with How to use nvstreamdemux and nvstreammux to process multiple live videos input and output.