How to share images between DeepStream and a parallel VPI process?

On our Xavier NX I have a VPI process that runs in real time and a DeepStream inference process that runs at a lower FPS.

Both need the same input frames from the CSI camera(s), and DeepStream has to asynchronously return inference results to the VPI process.

What would be a good approach to this? Specifically, how can the same CSI camera frames be shared between the two processes in a performant manner?

Many thanks

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or sample application it is for, and the function description.)
• The pipeline being used

Ah yes, apologies:

Hardware: Xavier NX
DeepStream: 6.1
JetPack: 5.0.2-b231
TensorRT/nvinfer: 8.4.1

What is the VPI process used to do? What is the whole media pipeline you expect?
You can use the DeepStream inference plugin's interval parameter to skip some frames.

Hi fanzh

We need broadcast quality from the VPI process: capturing two 4K images from the cameras, warping them, grabbing their intersection, and sending the result to an RTMP endpoint, all at 30 fps.

The warping homographies are defined by the parallel inference process, which is assumed to run at a lower FPS and feeds the warp details back.
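
For context, the warp step is roughly the following per camera (a minimal sketch using VPI's Python API; the frame and homography below are placeholders for what the capture path and the inference process would provide, and I haven't tested this exact snippet):

```python
import numpy as np
import vpi

# Placeholder 4K frame; in the real pipeline this comes from the CSI capture path.
frame = np.zeros((2160, 3840, 3), dtype=np.uint8)

# Placeholder homography; in our design this is fed back by the inference process.
H = np.eye(3, dtype=np.float32)

with vpi.Backend.CUDA:
    img = vpi.asimage(frame).convert(vpi.Format.NV12_ER)
    warped = img.perspwarp(H)          # apply the perspective warp on the GPU
    rgb = warped.convert(vpi.Format.RGB8)

with rgb.rlock_cpu() as out:
    print(out.shape)  # numpy view of the warped frame, ready for the encode path
```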

Is this a configuration that DeepStream can work with, or should I think of another architecture?

What is the model used to do?

Why is the FPS low? Is it because of a performance issue, or do you want to do inference only every few frames?

  1. Yes, it is the inference plugin's property and it is configurable. Please find interval in this link: nvinfer
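
For example, from Python it can be set like any other GStreamer property (an untested sketch; the config file path is a placeholder):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "pgie_config.txt")  # placeholder config path
# interval = number of frames to skip between inferences;
# interval=1 means inference runs on every second frame.
pgie.set_property("interval", 1)
```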

Hi fanzh

I assume the FPS for YOLO on two images will be less than 30 fps on our Xavier, so I imagined a parallel architecture would be required. We are detecting and following sports players, so we need detection at the same time as broadcast-quality camera output.

Thank you for the link, but it is difficult to understand at the moment

You can use the DeepStream nvinfer and nvtracker plugins: the first to detect objects and the second to track them. Please refer to the deepstream-test2 sample in the DeepStream SDK; this sample captures raw data, detects objects, and tracks objects.

“interval” is a parameter of the DeepStream GStreamer plugin nvinfer; if it is set to 1, nvinfer will do inference on every second frame (one frame is skipped between inferences).
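
Putting it together, a deepstream-test2-style pipeline on Jetson could look roughly like this (an untested sketch; the sensor id, resolution, config path and tracker library path are assumptions you would adapt, and the detection/tracking metadata would be read from a pad probe, which is omitted here):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# CSI capture -> batch (nvstreammux) -> detect (nvinfer, skipping every other
# frame via interval=1) -> track (nvtracker) -> on-screen display.
pipeline = Gst.parse_launch(
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM),width=3840,height=2160,framerate=30/1 ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=3840 height=2160 ! "
    "nvinfer config-file-path=pgie_config.txt interval=1 ! "
    "nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/"
    "libnvds_nvmultiobjecttracker.so ! "
    "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)
```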

