Visualization tool that shows the TensorRT Inference Server Metrics

Hi all, is there some kind of visualization tool for the TensorRT Inference Server performance metrics, as shown in the webinar "Maximizing GPU Utilization for Data Center Inference with NVIDIA TensorRT Inference Server on GKE with Kubeflow"?

Hello,

Prometheus and Grafana were used to visualize the metrics in the demo.
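
For anyone looking for a starting point: the inference server exposes its metrics in Prometheus text format on an HTTP endpoint (port 8002 by default), so Prometheus can scrape it directly and Grafana can chart the resulting series. Below is a minimal sketch just to inspect that endpoint; the host name is a placeholder, not from the demo.

```python
# Minimal sketch: fetch the Prometheus-format metrics that TRTIS exposes.
# Assumptions: metrics are enabled (the default), "trtis-host" is a
# placeholder for your server's address, and the metrics port is the
# default 8002.
import urllib.request

METRICS_URL = "http://trtis-host:8002/metrics"  # placeholder host

with urllib.request.urlopen(METRICS_URL) as resp:
    text = resp.read().decode("utf-8")

# Print only the nv_* metric lines (skip HELP/TYPE comments) to see what
# Prometheus would scrape and what Grafana could then chart.
for line in text.splitlines():
    if line.startswith("nv_"):
        print(line)
```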

Thanks @NVES. Is there an example that shows how to extract the performance metrics from TRTIS and visualize them with Prometheus and Grafana?

Hi NVES, does NVIDIA provide a pre-built Grafana dashboard for plotting the TRTIS performance metrics as shown in the demo? If not, which metric was used to differentiate the models running on each GPU? In the enclosed picture the models are represented in different colors; which metric is that?
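
For reference, and not confirmed as what the demo actually used: TRTIS exports per-model counters such as nv_inference_request_success, labeled with the model name, so a Grafana panel backed by Prometheus can show one colored series per model. Below is a rough sketch of such a query against Prometheus' HTTP API; the host is a placeholder, and the metric and label names are taken from the TRTIS metrics documentation rather than from the demo.

```python
# Sketch: query Prometheus for a per-model inference rate, the kind of
# expression a Grafana panel could plot as one colored series per model.
# Assumptions: Prometheus at "prometheus-host:9090" (placeholder) is already
# scraping the TRTIS metrics endpoint; nv_inference_request_success and its
# "model" label come from the TRTIS metrics docs, not from the demo itself.
import json
import urllib.parse
import urllib.request

PROMETHEUS_URL = "http://prometheus-host:9090/api/v1/query"  # placeholder host
promql = "sum by (model) (rate(nv_inference_request_success[1m]))"

url = PROMETHEUS_URL + "?" + urllib.parse.urlencode({"query": promql})
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)

# One result entry per model; each would be a separate colored line in Grafana.
for series in result["data"]["result"]:
    print(series["metric"].get("model"), series["value"][1])
```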