I am using nvinferserver with an ensemble model running in a separate, standalone Triton server. DeepStream and the Triton server communicate over gRPC.
The Triton server is running the following models:
a. YoloX,
b. pre-processing,
c. post-processing,
d. ensemble_yolox
Can I pass the source-id of the DeepStream pipeline along with the frame to the Python post-processing (python_backend) model? If so, could you please let me know how?
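One possible workaround, sketched below under assumptions: nvinferserver only sends the tensors declared in the model config, so you would have to declare an extra input tensor for the id yourself. The tensor names `SOURCE_ID`, `INPUT_0`, and `OUTPUT_0` are hypothetical, and passing the id as a model input is not something nvinferserver does out of the box; this is just how the python_backend side might read such a tensor if you arrange for it to be sent.

```python
# Hypothetical Triton python_backend model.py sketch. Assumes the model
# config declares an extra int32 input "SOURCE_ID" (shape [1]) next to
# the image tensor "INPUT_0"; both names are assumptions for illustration.
import numpy as np

try:
    import triton_python_backend_utils as pb_utils  # only exists inside Triton
except ImportError:
    pb_utils = None  # lets the pure helper below run outside Triton


def decode_source_id(tensor: np.ndarray) -> int:
    """Pure helper: pull the scalar source id out of the int32 tensor."""
    return int(tensor.reshape(-1)[0])


class TritonPythonModel:
    def execute(self, requests):
        responses = []
        for request in requests:
            frames = pb_utils.get_input_tensor_by_name(request, "INPUT_0").as_numpy()
            src = pb_utils.get_input_tensor_by_name(request, "SOURCE_ID").as_numpy()
            source_id = decode_source_id(src)
            # ... run post-processing on `frames`, keyed by source_id ...
            out = pb_utils.Tensor("OUTPUT_0", frames.astype(np.float32))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```

Whether the DeepStream side can actually populate such an extra tensor depends on your nvinferserver/preprocess configuration, so treat this as one direction to investigate rather than a confirmed recipe.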
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
There are many samples showing how to get NvDsFrameMeta from the DeepStream pipeline, e.g. /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test1.
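For reference, NvDsFrameMeta carries a source_id field, so on the DeepStream side a pad probe in the style of the Python test apps can read it per frame. A minimal sketch, assuming the pyds bindings are installed and the probe is attached to a pad downstream of nvstreammux:

```python
# Minimal pad-probe sketch that reads source_id from NvDsFrameMeta,
# in the style of the DeepStream Python sample apps.
try:
    import pyds                      # DeepStream Python bindings
    from gi.repository import Gst
except ImportError:
    pyds = Gst = None                # allows importing this file outside DeepStream


def frame_source_ids(batch_meta, cast=None):
    """Walk the batch's frame-meta list and collect each frame's source_id."""
    cast = cast or pyds.NvDsFrameMeta.cast
    ids = []
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = cast(l_frame.data)
        ids.append(frame_meta.source_id)
        l_frame = l_frame.next
    return ids


def probe(pad, info, _user_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    print("source ids in this batch:", frame_source_ids(batch_meta))
    return Gst.PadProbeReturn.OK
```

This only gets the id inside the DeepStream process; forwarding it into the remote python_backend model would still require an extra model input or a similar channel on the Triton side.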