I want to use a DeepStream pipeline as an inference service: when a picture is received over the network, the pipeline should run inference on it, and the structured results should then be sent back out over the network.
The sample pipeline reads a continuous sequence of pictures with multifilesrc, but my application cannot proceed until a picture actually arrives over the network. Is there a good way to build the pipeline described above?
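One approach I am considering (a sketch only, not a confirmed DeepStream recipe) is to replace multifilesrc with appsrc, so the application can push each picture into the pipeline as it arrives from the network, and to read the inference metadata back with appsink. The caps (JPEG input) and the decoder element are assumptions based on the standard DeepStream sample pipelines:

```
# Pipeline sketch: appsrc is fed programmatically when a picture arrives;
# appsink (or a pad probe on nvinfer's src pad) delivers the results.
appsrc caps=image/jpeg ! jpegparse ! nvv4l2decoder ! mux.sink_0
nvstreammux name=mux batch-size=1 width=1280 height=720 !
nvinfer config-file-path=config_infer_primary.txt ! appsink
```

In the application, a new buffer would be pushed into appsrc with `push-buffer` each time a picture is received, so the pipeline simply stays idle until data arrives.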
After getting the inference results, I don't know which picture each result belongs to. How can I solve this problem?
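One idea (an assumption on my side, not a DeepStream API) is to record an identifier for each picture at the moment it is pushed into the pipeline, keyed by the frame number it will carry, and look the identifier up again when the results come out. The helper names below (`FrameRegistry`, `register`, `resolve`) are illustrative; the assumption is that one input buffer produces one frame with a monotonically increasing `frame_num` (as in `NvDsFrameMeta.frame_num`) and no frames are dropped:

```python
# Hypothetical sketch: correlate inference output with its input picture.
class FrameRegistry:
    """Maps the frame number assigned on input to the identifier of the
    picture that produced that frame."""

    def __init__(self):
        self._next_frame_num = 0
        self._frames = {}

    def register(self, picture_id):
        # Call when pushing a picture into appsrc; returns the frame number
        # the pipeline is expected to see for this buffer (assumption:
        # one buffer per picture, no drops or reordering).
        frame_num = self._next_frame_num
        self._next_frame_num += 1
        self._frames[frame_num] = picture_id
        return frame_num

    def resolve(self, frame_num):
        # Call from the appsink / pad-probe callback with the frame number
        # read from the frame metadata to recover the original picture.
        return self._frames.pop(frame_num, None)


registry = FrameRegistry()
n = registry.register("cam0/img_0001.jpg")  # on the way in
picture = registry.resolve(n)               # on the way out
```

With this, the structured data sent back over the network can include `picture`, so the receiver knows which image the results describe.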
• Hardware Platform (Jetson / GPU)
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)