Is there a performance comparison result with other applications?

Are there any results comparing the performance of DeepStream with other applications?
We reviewed the official DeepStream site and built a test environment.
I understand DeepStream's strengths, but since there is no comparison with other applications, I am still deciding whether to adopt it.

One way to measure is to check the %sm and %dec utilization rates with the command below on a Tesla platform:

$ nvidia-smi dmon

The higher the utilization, the better the performance.
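To turn the `nvidia-smi dmon` readings into a comparable number, you can capture its output during a run (e.g. `nvidia-smi dmon -c 60 > dmon.log`) and average the utilization columns. Below is a minimal sketch that parses captured output; the column layout shown in the sample is illustrative and can vary between driver versions, so the parser locates fields by the header line rather than by fixed position.

```python
# Sketch: average the sm/dec utilization columns from captured
# `nvidia-smi dmon` output. Column order can differ between driver
# versions, so the header line is used to name the fields.

def average_utilization(dmon_output: str) -> dict:
    """Return the mean of each numeric column, keyed by header name."""
    columns = []
    sums, count = {}, 0
    for line in dmon_output.strip().splitlines():
        fields = line.split()
        if line.startswith("#"):
            # The first header line names the columns (gpu, sm, dec, ...).
            if not columns:
                columns = fields[1:]  # drop the leading "#"
            continue
        count += 1
        for name, value in zip(columns, fields):
            if value != "-":  # "-" marks an unavailable sensor
                sums[name] = sums.get(name, 0.0) + float(value)
    return {name: total / count for name, total in sums.items()}

# Example with captured output (values are illustrative only):
sample = """\
# gpu   pw  gtemp  mtemp    sm   mem   enc   dec  mclk  pclk
# Idx    W      C      C     %     %     %     %   MHz   MHz
    0   45     40      -    60    30     0    80  7600  1530
    0   47     41      -    70    35     0    90  7600  1530
"""
stats = average_utilization(sample)
print(stats["sm"], stats["dec"])  # → 65.0 85.0
```

Running the same pipeline under DeepStream and under the other application, then comparing the averaged %sm and %dec values, gives a like-for-like utilization comparison.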

Are there any measurement results comparing DeepStream with other applications?

Are there any specific results for DeepStream, such as "lower memory usage" or "shorter inference time", compared with other applications?

You can refer to the Triton Inference Server repository on GitHub (triton-inference-server/server), which provides an optimized cloud and edge inferencing solution.
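For the "shorter inference time" question, the most reliable numbers are usually the ones you measure yourself on your own model and hardware. Below is a generic timing-harness sketch for that: `run_inference` is a stand-in for whatever call each candidate exposes (a DeepStream app invocation, a Triton client request, etc.), and the warm-up/iteration counts are arbitrary defaults, not values from any documentation.

```python
# Generic sketch: time the same workload under each candidate pipeline
# and report latency statistics. `run_inference` is a placeholder for
# the real inference call being compared.
import statistics
import time

def benchmark(run_inference, warmup=5, iterations=50):
    """Return mean and ~p95 latency in milliseconds."""
    for _ in range(warmup):          # discard warm-up runs (caches, JIT)
        run_inference()
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "mean_ms": statistics.fmean(samples),
        "p95_ms": samples[int(0.95 * len(samples)) - 1],
    }

# Example with a dummy CPU workload standing in for a real model call:
result = benchmark(lambda: sum(i * i for i in range(10_000)))
print(result["mean_ms"], result["p95_ms"])
```

Running the same harness against each framework, with identical input data and batch size, gives a direct inference-time comparison for your specific deployment.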