Original pipeline stuck when using interpipesrc

Hi everyone.

I am trying to add dynamic control to my code, so I am modifying deepstream-app-test4.

When I set the recording pipeline as

recording_pipe = gst_parse_launch("interpipesrc listen-to=record_h265 format=time allow-renegotiation=true enable-sync=true is-live=true ! queue ! nvmultistreamtiler width=1920 height=1080 rows=2 columns=2 ! nvvideoconvert ! nvv4l2h265enc insert-sps-pps=true iframeinterval=10 bitrate=8000000 ! h265parse ! matroskamux ! filesink name=filesink_record_h265 location=output_h265.avi sync=false async=false", NULL);

and set its state to GST_STATE_PLAYING, the original pipeline freezes and I get only one frame of video output.

But if I set the pipeline as

recording_pipe = gst_parse_launch("interpipesrc listen-to=record_h265 format=time allow-renegotiation=true enable-sync=true is-live=true ! queue ! nvmultistreamtiler width=1920 height=1080 rows=2 columns=2 ! nvvideoconvert ! nveglglessink", NULL);

it displays the streaming video; the original pipeline's display lags but is able to play.

This is the interpipesink I used in my code.
g_object_set (G_OBJECT (recording_sink), "name", "record_h265", "forward-events", TRUE, "forward-eos", TRUE, "sync", FALSE, "async", FALSE, "enable-last-sample", FALSE,"drop", TRUE, NULL);

Does anyone have an idea about this issue?

Thanks in advance

• Hardware Platform (Jetson / GPU): Jetson AGX Xavier
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4

I solved the problem by changing part of the recording pipeline to muxer -> nvvidconv -> nvdsosd -> nvvidconv -> h264encoder.

encoding_pipe = gst_parse_launch("interpipesrc listen-to=stream_h264 format=time allow-renegotiation=true enable-sync=true is-live=true\
                                ! queue ! nvmultistreamtiler width=1920 height=1080 rows=2 columns=2 ! nvvideoconvert ! nvdsosd ! nvvideoconvert \
                                ! nvv4l2h264enc insert-sps-pps=true iframeinterval=10 bitrate=8000000 \
                                ! interpipesink name=encode_h264 forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true ", NULL);

recording_pipe = gst_parse_launch("interpipesrc listen-to=encode_h264 format=time allow-renegotiation=true enable-sync=true is-live=true\
                                ! queue ! h264parse ! matroskamux ! filesink name=record_h264 location=output_h264.mkv sync=false async=false", NULL);

But there is a new problem. The recorded video has bounding boxes, although the filesink branch is not connected to the nvinfer plugin.

If I use muxer -> nvvidconv -> video/x-raw (NV12 or I420) -> h264encoder, the filesink does not work.

So I looked into the sink and src pads of nvv4l2h264enc, nvmultistreammux and nvdsosd.
The sink of nvv4l2h264enc supports video/x-raw(memory:NVMM) with format: { (string)I420, (string)NV12, (string)P010_10LE }.

The src of nvmultistreammux supports video/x-raw(memory:NVMM) with format: { (string)NV12, (string)RGBA },
and nvdsosd's src supports video/x-raw(memory:NVMM) with format: { (string)RGBA }.

But when I converted the format to NV12 or I420 with nvvideoconvert or a capsfilter, the pipeline does not work.

Currently nvv4l2h264enc needs "I420" format input.

Can you try the latest DeepStream 5.0 GA version for this issue? You can get it here: https://developer.nvidia.com/deepstream-getting-started

Hi Fiona, thank you for the previous reply.

My colleague tested this in Python and also with a gst-launch command; there were no bounding boxes from the inference pipeline. But in the C/C++ version, the osd connecting the tiler and nvv4l2h264enc causes the problem.

And for the encoder, it does not work if I remove the osd and replace it with 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420'.

This is the code I am working on now.

forming_pipe = gst_parse_launch("interpipesrc listen-to=stream_h264 format=time allow-renegotiation=true enable-sync=true is-live=true\
                                ! queue ! nvmultistreamtiler width=1920 height=1080 rows=2 columns=2 ! nvvideoconvert \
                                ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420' \
                                ! interpipesink name=deep forward-events=true forward-eos=true sync=false ",NULL);

encoding_pipe = gst_parse_launch("interpipesrc listen-to=deep format=time allow-renegotiation=true enable-sync=true is-live=true\
                                ! queue ! nvvideoconvert ! nvv4l2h264enc insert-sps-pps=true iframeinterval=10 bitrate=8000000 \
                                ! interpipesink name=encode_h264 forward-events=true forward-eos=true sync=false async=false enable-last-sample=false drop=true ", NULL);

recording_pipe = gst_parse_launch("interpipesrc listen-to=encode_h264 format=time allow-renegotiation=true enable-sync=true is-live=true\
                                ! queue ! h264parse ! matroskamux ! filesink name=record_h264 location=/mnt/ssd/record_output_h264.mkv -e sync=false async=false", NULL);

If I start these pipelines, inference runs normally, but the file I saved is 0 bytes; the filesink is not working. This is unlike the nvdsosd version, which can record video with metadata from the inference pipeline.

And I am using DeepStream 5.0 GA.

interpipesrc and interpipesink have nothing to do with DeepStream.

I don't know what kind of source you are using, but I have tried the following pipeline; the encoder works well and the output file is OK. It includes all the plugins you use in your code except interpipesrc and interpipesink.

The pipeline shows that nvv4l2h264enc can work well with the caps 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420'.

gst-launch-1.0 --gst-debug=3 filesrc location=/opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt unique-id=1 ! nvmultistreamtiler width=1920 height=1080 rows=2 columns=2 ! nvvideoconvert ! 'video/x-raw(memory:NVMM),width=1280,height=720,format=I420' ! nvv4l2h264enc ! h264parse ! matroskamux ! filesink location=test.mkv