Sample program for using Triton inference


• **Hardware Platform (Jetson / GPU)**: RTX 3080
• **DeepStream Version**: 6.4
I am wondering whether deepstream-app can be used together with Triton local inference. My plan is as follows:
(1) Use deepstream-app up to the stream muxer, so that my config file has only sources and streammux.
(2) PGIE, tracker, and SGIEs are done using Triton local inference.
(3) All metadata is then pushed back to the OSD for display, or sent over RTSP for remote display.
Is this possible?
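For what it's worth, steps like these can be sketched directly in a deepstream-app config, since the GIE groups accept `plugin-type=1` to use nvinferserver (Triton) instead of nvinfer (TensorRT). A minimal sketch, assuming DeepStream 6.x config-group syntax; the URI and config file name below are placeholders:

```ini
# Sketch of a deepstream-app config routing inference through Triton.
# The source URI and config-file name are placeholders, not real paths.
[source0]
enable=1
type=3          # 3 = URI source
uri=file:///path/to/sample.mp4
num-sources=1

[streammux]
batch-size=1
width=1920
height=1080

[primary-gie]
enable=1
plugin-type=1   # 0 = nvinfer (TensorRT), 1 = nvinferserver (Triton)
config-file=config_infer_primary_triton.txt

[osd]
enable=1

[sink0]
enable=1
type=2          # 2 = on-screen (EGL) sink
```

With this layout, sources and streammux stay in the app config as planned, while the actual inference backend is delegated to Triton through the `plugin-type` switch.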
All inference and tracking would still be done in TensorRT. The reason is that the deepstream-app pipeline has limitations when it comes to making changes; doing it this way gives us more freedom to modify the code.

We would also get benefits such as running multiple instances of a model if required. Right now I even have difficulties modifying the object metadata for the SGIEs; please see my earlier queries.
Are there any samples for this?

Yes, please refer to nvinferserver, which leverages Triton to do inference.
Please refer to deepstream-test1 for an nvinferserver sample.
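For reference, nvinferserver's own configuration is a protobuf text file pointing at a Triton model repository. A minimal sketch follows; the model name and repository path are placeholders, and exact field names may vary between DeepStream releases:

```
# Sketch of an nvinferserver config (protobuf text format).
# model_name and root are placeholders for your own model repository.
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "my_detector"      # placeholder model name
      version: -1                    # -1 = latest version in the repo
      model_repo {
        root: "/path/to/model_repo"  # placeholder Triton model repository
      }
    }
  }
}
```

The deepstream-test1 nvinferserver sample ships with complete config files of this form that can be used as a starting point.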
