Run Deepstream without pgie or tracker

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson Nano Developer Kit
• DeepStream Version: 5.0
• JetPack Version: 4.4
• TensorRT Version: 7.1.0

Problem description:
Hi Sir/Madam:
Currently, I am trying to run DeepStream without the object detection and tracking modules (pgie and tracker).
The original pipeline works well and looks like this: rtsp stream input -> streammux -> pgie -> tracker -> nvvidconv -> filter (video/x-raw(memory:NVMM), format=RGBA) -> tiler -> nvvidconv -> nvosd -> tee -> …
Now I would like to remove the pgie and tracker modules and keep just the basic video processing pipeline (the current project has no need for object detection and tracking). What I did was simply remove those two modules, as follows:
rtsp stream input -> streammux -> nvvidconv -> filter (video/x-raw(memory:NVMM), format=RGBA) -> tiler -> nvvidconv -> nvosd -> tee -> …

With those two modules removed, the RTSP stream can still be displayed locally (since I used nveglglessink). However, there is a huge delay, around 20 seconds or more.
If I add those modules back, the delay disappears.

Here are the input and output of several modules of this video pipeline:

I am wondering whether you can help.
Thanks a lot for your help.

The following pipeline can work, no delay is observed:
gst-launch-1.0 nvstreammux name=m batched-push-timeout=40000 batch-size=2 width=960 height=540 live-source=1 ! queue ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=RGBA' ! nvmultistreamtiler rows=1 columns=2 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink qos=true sync=false async=false \
rtspsrc location=rtsp://xxxxxx ! rtph264depay ! nvv4l2decoder ! m.sink_0 \
rtspsrc location=rtsp://xxxxxx ! rtph264depay ! nvv4l2decoder ! m.sink_1

You may need to check your implementation.

Hi Fiona:
Thanks a lot for your reply. I have found the reason: I was using cv2.resize to resize each frame, which is extremely slow.