Does DS-Triton support Prometheus metrics for monitoring?

Does the DeepStream-Triton integration support Prometheus metrics for monitoring?

Could you share more info about the Prometheus metrics you need? What's your platform?

Hi @mchi, thanks for your reply. My server has T4 GPUs. Does the DS-Triton integration support the same or similar Prometheus metrics as the standalone Triton server? See server/metrics.md at main · triton-inference-server/server · GitHub
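
For reference, this is what I scrape from a standalone tritonserver today (8002 is the default metrics port, if I read metrics.md correctly; the metric names below are examples taken from that page):

    $ curl localhost:8002/metrics
    # Example metric families listed in metrics.md:
    #   nv_inference_request_success   - number of successful inference requests
    #   nv_gpu_utilization             - GPU utilization rate (0.0 - 1.0)
    #   nv_gpu_memory_used_bytes       - GPU memory used, in bytes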

Hi @virsg,
You can use the methods below to get the metrics data.

  1. nvidia-smi
    $ nvidia-smi dmon -s ucmtp   # reports GPU/memory utilization and other info, as below
    gpu   sm  mem  enc  dec  mclk  pclk    fb  bar1  rxpci  txpci  pwr  gtemp  mtemp
    Idx    %    %    %    %   MHz   MHz    MB    MB   MB/s   MB/s    W      C      C
      0    0    0    0    0   405   300   103     3      0      0    9     20      -
      1    0    1    0    0   405   210     2     2      0      0    5     21      -
      2    0    0    0    0   405   455     2     2      0      0    6     22      -
    (see the sketch after this list for one way to expose these values in Prometheus format)

  2. About latency, you can refer to the Troubleshooting section of the DeepStream 6.1.1 Release documentation to capture the DS-Triton latency log (a minimal way to enable it is sketched below).
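
For example (the environment variables below are the ones described on the Troubleshooting page for enabling latency measurement; please double-check them against your DeepStream version, and replace the config path with your own):

    # Enable frame latency measurement for the pipeline
    $ export NVDS_ENABLE_LATENCY_MEASUREMENT=1
    # Optionally, also report per-component (per-plugin) latency
    $ export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1
    # Run the DS-Triton pipeline; latency numbers are printed to the console log
    $ deepstream-app -c <your_ds_triton_config.txt>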
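
If you need GPU numbers in Prometheus format, one workaround (this is only a sketch, not a DeepStream feature) is to convert nvidia-smi query output into the Prometheus text format and let node_exporter's textfile collector pick it up. The output path and metric names below are placeholders you would adapt:

    # One-shot example: write per-GPU utilization and memory as Prometheus text-format
    # metrics into node_exporter's textfile collector directory (path is an example)
    $ nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv,noheader,nounits \
        | awk -F', ' '{
            printf "gpu_utilization_percent{gpu=\"%s\"} %s\n", $1, $2;
            printf "gpu_memory_used_mib{gpu=\"%s\"} %s\n", $1, $3;
          }' > /var/lib/node_exporter/textfile_collector/gpu.prom

You could run this from cron or a small loop; Prometheus then scrapes the resulting gauges through node_exporter.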