Inferencing multiple models - CGF / DriveWorks / Triton

Please provide the following info (tick the boxes after creating this topic):
Software Version
[+] DRIVE OS 6.0.8.1

Target Operating System
[+] Linux

Hardware Platform
[+] DRIVE AGX Orin Developer Kit (940-63710-0010-300)

Host Machine Version
[+] native Ubuntu Linux 20.04 Host installed with DRIVE OS Docker Containers

Is DRIVE OS supposed to be used with “NVIDIA Triton Inference Server” for easy deployment of multiple models, optimized memory management, etc.? Or are there any other alternatives, especially ones that work with CGF? Or is the sample DNN example in the DriveWorks samples the only available example?
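
For context, loading several models through the DriveWorks dwDNN API looks roughly like the sketch below. The model file names are placeholders, error checking is omitted, and the exact signatures should be verified against the DRIVE OS 6.0.8.1 DriveWorks headers:

```cpp
// Rough sketch only: two independent TensorRT engines loaded through the
// DriveWorks dwDNN API. File names are placeholders and error checking
// is omitted; verify signatures against the DRIVE OS 6.0.8.1 headers.
#include <dw/core/context/Context.h>
#include <dw/dnn/DNN.h>

int main()
{
    dwContextHandle_t ctx = DW_NULL_HANDLE;
    dwContextParameters sdkParams{};
    dwInitialize(&ctx, DW_VERSION, &sdkParams);

    // One dwDNN handle per model; each wraps its own TensorRT engine,
    // so the models can be dispatched to GPU or DLA independently.
    dwDNNHandle_t dnnA = DW_NULL_HANDLE;
    dwDNNHandle_t dnnB = DW_NULL_HANDLE;
    dwDNN_initializeTensorRTFromFile(&dnnA, "modelA.bin", nullptr,
                                     DW_PROCESSOR_TYPE_GPU, ctx);
    dwDNN_initializeTensorRTFromFile(&dnnB, "modelB.bin", nullptr,
                                     DW_PROCESSOR_TYPE_DLA_0, ctx);

    // ... bind input/output device buffers and run per-model inference
    // here, as in the DriveWorks DNN sample ...

    dwDNN_release(dnnA);
    dwDNN_release(dnnB);
    dwRelease(ctx);
    return 0;
}
```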

Please refer to Triton Inference Server with TensorRT for multiple model instances and see if it helps.
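
As a rough illustration of what this looks like without Triton: the plain TensorRT C++ API can also host several engines in one process, each with its own execution context and CUDA stream so inferences can overlap. The plan file names, FP32 I/O, and static-shape assumptions below are placeholders, not from any DRIVE sample:

```cpp
// Sketch: serving two TensorRT engines from one process, each on its own
// CUDA stream. Assumes FP32 bindings and static shapes; plan file names
// are placeholders.
#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>
#include <fstream>
#include <vector>

struct Logger : nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING) std::printf("%s\n", msg);
    }
};

static std::vector<char> readPlan(const char* path)
{
    std::ifstream f(path, std::ios::binary | std::ios::ate);
    std::vector<char> buf(static_cast<size_t>(f.tellg()));
    f.seekg(0);
    f.read(buf.data(), static_cast<std::streamsize>(buf.size()));
    return buf;
}

// Allocate one FP32 device buffer per binding (assumes static shapes).
static std::vector<void*> allocBindings(nvinfer1::ICudaEngine* engine)
{
    std::vector<void*> bufs(engine->getNbBindings(), nullptr);
    for (int i = 0; i < engine->getNbBindings(); ++i)
    {
        auto dims = engine->getBindingDimensions(i);
        size_t vol = 1;
        for (int d = 0; d < dims.nbDims; ++d)
            vol *= static_cast<size_t>(dims.d[d]);
        cudaMalloc(&bufs[i], vol * sizeof(float));
    }
    return bufs;
}

int main()
{
    Logger logger;
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(logger);

    auto planA = readPlan("modelA.plan");
    auto planB = readPlan("modelB.plan");
    nvinfer1::ICudaEngine* engineA =
        runtime->deserializeCudaEngine(planA.data(), planA.size());
    nvinfer1::ICudaEngine* engineB =
        runtime->deserializeCudaEngine(planB.data(), planB.size());

    nvinfer1::IExecutionContext* ctxA = engineA->createExecutionContext();
    nvinfer1::IExecutionContext* ctxB = engineB->createExecutionContext();

    cudaStream_t streamA, streamB;
    cudaStreamCreate(&streamA);
    cudaStreamCreate(&streamB);

    std::vector<void*> bindA = allocBindings(engineA);
    std::vector<void*> bindB = allocBindings(engineB);

    // Both inferences are enqueued asynchronously and may overlap on GPU.
    ctxA->enqueueV2(bindA.data(), streamA, nullptr);
    ctxB->enqueueV2(bindB.data(), streamB, nullptr);
    cudaStreamSynchronize(streamA);
    cudaStreamSynchronize(streamB);

    // Cleanup (contexts, engines, runtime, device buffers) omitted.
    return 0;
}
```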

Dear @VickNV,

Thanks for the quick reply. I am aware of its use with Jetson devices; I was asking whether, for DRIVE OS on the Orin dev kit, it is advised to use the Triton Inference Server. I have never seen anyone use it or mention it on the dev forum.
If yes, is there any documentation/guideline for using it within CGF? Thanks again.

Best Regards,
Sushant Bahadure

No, Triton Inference Server hasn’t been tested on DRIVE Orin. FYI.