Integrating a custom inferencing solution with deepstream-app

• Hardware Platform : GPU
• DeepStream Version: 5.1.0
• TensorRT Version: 7.2.2.3
• NVIDIA GPU Driver Version: 470.57.02
• Issue Type: new requirement
• Requirement details: I would like to know whether it is possible to call a REST API (a custom inferencing service bundled with Triton Server) instead of the Gst-nvinfer plugin in the deepstream-app pipeline, leaving all other pipeline elements intact. If so, how can we replace or modify the Gst-nvinfer plugin so that we can use our custom solution for inferencing?

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinferserver.html

A sample of gst-nvinferserver: deepstream-infer-tensor-meta-test
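For reference, swapping Gst-nvinfer for Gst-nvinferserver is mostly a one-element change in the pipeline. A minimal Python sketch of that swap (the element and property names are the real DeepStream ones; the source URI and the Triton config file name are placeholders):

```python
#!/usr/bin/env python3
# Minimal sketch: nvinferserver takes the slot nvinfer would occupy,
# while every other element stays intact. URI and config path are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "uridecodebin uri=file:///tmp/sample.mp4 ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1280 height=720 "
    "batched-push-timeout=40000 ! "
    # nvinfer would normally sit here; nvinferserver talks to Triton instead.
    "nvinferserver config-file-path=config_infer_triton.txt ! "
    "nvvideoconvert ! nvdsosd ! nveglglessink"
)
pipeline.set_state(Gst.State.PLAYING)
try:
    GLib.MainLoop().run()
finally:
    pipeline.set_state(Gst.State.NULL)
```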

Hi,

Sorry, I am not sure I understood. This is a custom REST API that does part of what gst-nvinferserver does, and the only way to access the Triton server is through this custom REST API:

REST API → Preprocessing → gRPC client → Triton Server → Postprocessing → JSON response
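A rough sketch of that flow, assuming a Flask endpoint and the standard tritonclient gRPC API; the model name, tensor names, and input shape are placeholders:

```python
# Rough sketch of the described flow: REST in, preprocess, gRPC to Triton,
# postprocess, JSON out. Model/tensor names and shapes are placeholders.
import numpy as np
import tritonclient.grpc as grpcclient
from flask import Flask, jsonify, request

app = Flask(__name__)
triton = grpcclient.InferenceServerClient(url="localhost:8001")

@app.route("/infer", methods=["POST"])
def infer():
    # Preprocessing: decode the posted bytes into the tensor layout the
    # model expects (placeholder: raw float32 CHW sent as the request body).
    tensor = np.frombuffer(request.data, dtype=np.float32).reshape(1, 3, 224, 224)

    inp = grpcclient.InferInput("input", list(tensor.shape), "FP32")
    inp.set_data_from_numpy(tensor)
    result = triton.infer(model_name="my_model", inputs=[inp])

    # Postprocessing: turn the raw output tensor into a JSON response.
    scores = result.as_numpy("output")
    return jsonify({"top_class": int(scores.argmax()), "score": float(scores.max())})

if __name__ == "__main__":
    app.run(port=8000)
```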

The question is: how and where do we invoke this REST API from the gst-nvinfer or gst-nvinferserver plugin? And if we have to pass the buffer and meta info supplied by upstream plugins to the custom REST API, how and where do we do that?
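For concreteness, the kind of hook we have in mind is a buffer probe at the point where nvinfer would sit, something like the sketch below (the pyds calls are from the DeepStream Python bindings; the endpoint URL and payload format are placeholders, and get_nvds_buf_surface assumes the upstream buffer is RGBA in host-accessible memory):

```python
# Hedged sketch: a buffer probe where nvinfer would sit, pulling each frame
# out of the batch and posting it to the custom REST endpoint.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds
import requests

REST_URL = "http://localhost:8000/infer"  # placeholder endpoint

def infer_probe(pad, info):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Map the frame's surface as a numpy array (requires RGBA output
        # from an upstream nvvideoconvert).
        frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # Ship the raw pixels to the custom service; the JSON response could
        # then be written back as NvDsObjectMeta for downstream plugins.
        resp = requests.post(REST_URL, data=frame.tobytes()).json()
        print(frame_meta.frame_num, resp)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK

# Attached to the src pad of the element feeding the (removed) nvinfer slot:
# pad.add_probe(Gst.PadProbeType.BUFFER, infer_probe)
```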

Please advise.

Hi rakesh.bhat,

Please open a new topic for your issue. Thanks.
