How to run Inference (PGIE) and Optical Flow correctly in the same Pipeline?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Xavier AGX
• DeepStream Version: 5.1 (Python)
• JetPack Version (valid for Jetson only): 4.5.1-b17
• Issue Type( questions, new requirements, bugs): Question

Hi, I’m trying to include the plugins “nvinfer + nvtracker + nvdsanalytics” and “nvof + nvofvisual” in the same pipeline. My goal is to run detection on the video source and then stream the processed video, with bounding boxes drawn on it, over RTSP. Additionally, I want to get the optical flow vectors and process them internally in the application. I tried with the following pipeline:

However, in the RTSP stream the bounding boxes are drawn on top of the optical flow visualization, as can be seen in this video.

Is there any way to get the “RGB” video with bounding boxes in the RTSP stream and simultaneously get/process the optical flow vectors inside the application?

PS: The most closely related question I found is “Can’t extract optical flow metadata when using nvdsanalytics plugin in the pipeline”, but it has no solution.

Do you mean you need the output video to contain both the optical flow visualization and the inference results? If so, there is no way to do that. Instead of using nvofvisual, you can try to read the NvDsOpticalFlowMeta (https://docs.nvidia.com/metropolis/deepstream/sdk-api/structNvDsOpticalFlowMeta.html) attached by nvof and draw the vectors yourself.
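Roughly, that means attaching a pad probe downstream of nvof and walking the frame user metadata. This is only a sketch: it assumes the pyds bindings from deepstream_python_apps expose `NvDsOpticalFlowMeta.cast` and `pyds.get_optical_flow_vectors` (as in the deepstream-opticalflow sample); verify these exist in your binding version before relying on them.

```python
import numpy as np


def summarize_flow(mv: np.ndarray) -> float:
    """Mean motion magnitude of a (rows, cols, 2) grid of flow vectors."""
    return float(np.mean(np.hypot(mv[..., 0], mv[..., 1])))


def of_src_pad_probe(pad, info, user_data):
    # Imported here so the pure-numpy helper above stays usable without DeepStream.
    import pyds  # assumed: DeepStream Python bindings
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            try:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            except StopIteration:
                break
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDS_OPTICAL_FLOW_META:
                of_meta = pyds.NvDsOpticalFlowMeta.cast(user_meta.user_meta_data)
                # One (dx, dy) vector per block; reshape the flat buffer
                # into a (rows, cols, 2) grid.
                mv = pyds.get_optical_flow_vectors(of_meta).reshape(
                    of_meta.rows, of_meta.cols, 2)
                print("frame", frame_meta.frame_num,
                      "mean motion:", summarize_flow(mv))
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

Since nvof only attaches metadata and does not modify the frames, dropping nvofvisual and keeping a probe like this would leave the RTSP stream showing only the RGB frames with bounding boxes.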

In the RTSP output I only need to display the inference results on the “RGB” frames. Internally, I need the data from both nvof and the PGIE. I’ll try what you suggested, thanks for replying.
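For the internal processing part, once the vectors are reshaped into a (rows, cols, 2) numpy array, something as simple as per-block magnitude thresholding already gives a usable motion signal. A minimal sketch (the threshold value and function names are my own, not from any DeepStream API):

```python
import numpy as np


def moving_blocks(mv: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Boolean mask of blocks whose motion magnitude exceeds threshold.

    mv: (rows, cols, 2) array of per-block (dx, dy) optical flow vectors.
    """
    magnitude = np.hypot(mv[..., 0], mv[..., 1])
    return magnitude > threshold


def motion_fraction(mv: np.ndarray, threshold: float = 1.0) -> float:
    """Fraction of blocks in the frame that are moving."""
    return float(np.mean(moving_blocks(mv, threshold)))
```

This mask could then be intersected with the PGIE bounding boxes (scaled to the flow grid resolution) to decide, per object, whether it is actually moving.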

