Tee makes inference pipeline go to 0 fps

• Hardware Platform (Jetson / GPU) Jetson Xavier
• DeepStream Version DS 6.0
• JetPack Version (valid for Jetson only) 4.6 (L4T 32.6.1)
• TensorRT Version 8.0.1
• Issue Type question

Hello!

I am running an inference pipeline and I want to add a tee to save H.264-encoded video with a filesink, before the frames go to the nvstreammux.
This is the pipeline:

    gst-launch-1.0 -v rtspsrc ! rtph264depay ! h264parse ! nvv4l2decoder ! videorate ! "video/x-raw(memory:NVMM),framerate=10/1,format=NV12" ! tee name=t t. ! queue ! nvv4l2h264enc ! h264parse ! filesink location='/test2.h264' t. ! queue ! "video/x-raw(memory:NVMM)" ! mux.sink_0 nvstreammux width=1920 height=1080 batch-size=1 name=mux ! queue ! nvinfer ! nvmultistreamtiler ! nvvideoconvert ! fakesink

It runs just fine from the terminal.
But when I implement the same pipeline in a Python script, the main inference branch produces frames at close to 0 fps (maybe one frame).

Pipeline linking:

    def link_elements_in_pipeline(self):
        self.h264depayloader.link(self.h264parser)
        self.h264parser.link(self.h264decoder)
        self.h264decoder.link(self.videorate)
        self.videorate.link(self.capsfilter_to_framerate)

        self.capsfilter_to_framerate.link(self.tee)

        tee_src_pad_record = self.tee.get_request_pad("src_0")
        record_queue_sink_pad = self.record_queue.get_static_pad("sink")
        tee_src_pad_record.link(record_queue_sink_pad)
        self.record_queue.link(self.h264_encoder)
        self.h264_encoder.link(self.h264parser_record)
        self.h264parser_record.link(self.filesink)       


        tee_srcpad_main_pipeline = self.tee.get_request_pad("src_1")
        pre_nvstreammux_queue_sinkpad = self.pre_nvstreammux_queue.get_static_pad("sink")
        tee_srcpad_main_pipeline.link(pre_nvstreammux_queue_sinkpad)

        pre_nvstreammux_queue_sourcepad = self.pre_nvstreammux_queue.get_static_pad("src")
        mux_sinkpad = self.streammux.get_request_pad("sink_0")
        pre_nvstreammux_queue_sourcepad.link(mux_sinkpad)

        # Main pipeline
        self.streammux.link(self.pre_primary_inference_queue)
        self.pre_primary_inference_queue.link(self.pgie)
        self.pgie.link(self.nvvideo_converter_after_pgie)
        self.nvvideo_converter_after_pgie.link(self.capsfilter_to_RGBA_format)
        self.capsfilter_to_RGBA_format.link(self.pre_tiled_frames_queue)
        self.pre_tiled_frames_queue.link(self.tiler)
        self.tiler.link(self.sink)

        self.tiler_sink_pad = self.tiler.get_static_pad("sink")
        if self.tiler_sink_pad:
            self.tiler_probe_id = self.tiler_sink_pad.add_probe(
                Gst.PadProbeType.BUFFER,
                self.tiler_sink_pad_buffer_probe_metadata_extraction,
                0,
            )

Do you have any ideas what the reason can be?

You may need to add an nvvideoconvert in each branch after the tee.
The GstBuffer in one branch maps to the same memory as in the other branch, so any change made in one branch is reflected in the other.
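Following that advice, the fix amounts to giving each tee branch its own queue and nvvideoconvert before the rest of the branch, so each branch works on its own copy of the frame. As a minimal sketch (the helper function and its name are my own, not part of DeepStream), the reworked gst-launch description could be assembled like this:

```python
def build_tee_pipeline(source: str, branches: list[str], tee_name: str = "t") -> str:
    """Assemble a gst-launch-1.0 description where every tee branch
    gets its own queue and nvvideoconvert ahead of its downstream elements."""
    parts = [f"{source} ! tee name={tee_name}"]
    for branch in branches:
        # nvvideoconvert copies the buffer into the branch's own memory,
        # so the encoder branch no longer mutates the inference branch's frames.
        parts.append(f"{tee_name}. ! queue ! nvvideoconvert ! {branch}")
    return " ".join(parts)

pipeline = build_tee_pipeline(
    "rtspsrc ! rtph264depay ! h264parse ! nvv4l2decoder",
    [
        "nvv4l2h264enc ! h264parse ! filesink location=/test2.h264",
        "mux.sink_0 nvstreammux width=1920 height=1080 batch-size=1 name=mux "
        "! nvinfer ! nvmultistreamtiler ! nvvideoconvert ! fakesink",
    ],
)
print(pipeline)
```

The same idea applies to the Python linking code above: create one nvvideoconvert per branch and link it between each branch's queue and the rest of that branch.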

However, from your description, I guess what you actually need is smart recording:

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Smart_video.html

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.