Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details needed to reproduce.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)
• The pipeline being used
We need broadcast quality from the VPI process - that's capturing two 4K images from a camera, warping them, grabbing an intersection, and sending the result to an RTMP endpoint, all at 30 fps.
The warping homographies are defined by a parallel inference process, which is assumed to run at a lower FPS and feeds the warp details back.
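To make the feedback concrete, here is a minimal pure-Python sketch of what "feeding back warp details" amounts to: applying a 3x3 homography matrix to image coordinates. The function name and matrices are illustrative only; the real pipeline would hand the matrix to VPI's warp stage rather than map points one by one.

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 homography H (row-major nested lists).

    Computes [x', y', w'] = H @ [x, y, 1] and returns the dehomogenized
    point (x'/w', y'/w').
    """
    xp = H[0][0] * x + H[0][1] * y + H[0][2]
    yp = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xp / w, yp / w

# Identity homography leaves points unchanged
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(I, 100, 50))  # → (100.0, 50.0)

# A pure translation of (+10, +20)
T = [[1, 0, 10], [0, 1, 20], [0, 0, 1]]
print(apply_homography(T, 100, 50))  # → (110.0, 70.0)
```

Because the matrix only changes when the (slower) inference branch produces a new estimate, the broadcast branch can keep warping every frame at 30 fps with the last matrix it received.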
Is this a configuration that DeepStream can work with, or should I think of another architecture?
I assume the FPS for YOLO on two images will be less than 30 fps on our Xavier, so I imagined a parallel architecture would be required. We are detecting and following sports players, so we require detection at the same time as broadcast-quality camera output.
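One way to sketch the parallel architecture in GStreamer terms is to split the camera stream with a `tee`: one branch encodes and streams at full rate, the other feeds inference and tracking, with a leaky queue so a slow detector never stalls the broadcast branch. This is a rough, untested pipeline sketch only; the config file name, RTMP URL, and tracker library path are placeholders, and the VPI warp stage is omitted.

```shell
# Hypothetical sketch - not a drop-in command line.
gst-launch-1.0 \
  nvarguscamerasrc ! \
  'video/x-raw(memory:NVMM),width=3840,height=2160,framerate=30/1' ! \
  tee name=t \
  t. ! queue ! nvv4l2h264enc ! h264parse ! flvmux ! \
       rtmpsink location=rtmp://example.com/live/stream \
  t. ! queue leaky=downstream ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=3840 height=2160 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  nvtracker ll-lib-file=libnvds_nvmultiobjecttracker.so ! \
  fakesink
```

The key design point is the `leaky=downstream` queue on the inference branch: if YOLO runs below 30 fps, frames are dropped there rather than backpressuring the encoder branch.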
Thank you for the link, but it is difficult to understand at the moment
You can use the DeepStream nvinfer and nvtracker plugins: the first to detect objects and the second to track them. Please refer to the deepstream-test2 sample in the DeepStream SDK; it captures raw data, detects objects, and tracks them.
“interval” is a parameter of the DeepStream GStreamer plugin nvinfer: it sets how many consecutive frames are skipped between inferences. If it is 1, nvinfer does inference on every second frame, and the tracker follows objects on the skipped frames.
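The arithmetic above can be sketched in a couple of lines. This assumes the documented meaning of `interval` (number of frames skipped between inferences, so inference runs on 1 of every `interval + 1` frames); the function name is mine, not a DeepStream API.

```python
def inference_fps(stream_fps, interval):
    """Effective nvinfer rate: 'interval' frames are skipped between
    inferences, so inference runs on 1 of every (interval + 1) frames."""
    return stream_fps / (interval + 1)

print(inference_fps(30, 0))  # → 30.0  (infer on every frame)
print(inference_fps(30, 1))  # → 15.0  (infer on every second frame)
```

So for a 30 fps broadcast stream, `interval=1` would let YOLO run at an effective 15 fps while nvtracker keeps detections attached to players on the in-between frames.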