The queue is empty when processing videos

1. Refer to this FAQ on how to get element latency.
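As a minimal sketch, DeepStream's per-frame and per-component latency measurement is enabled through environment variables before launching the app (variable names are the standard DeepStream ones; the app name is just an example):

```shell
# Enable frame-level latency measurement (standard DeepStream env var)
export NVDS_ENABLE_LATENCY_MEASUREMENT=1
# Also enable per-component (element-level) latency measurement
export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1
# Then run your pipeline, e.g.:
# deepstream-app -c <your_config.txt>
```

With both variables set, the app logs latency for each element in the pipeline, which helps identify where frames are waiting.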

2. You can try using nvinferserver in gRPC mode, and then query the Triton Server pending request count.

For how to use nvinferserver in gRPC mode, please refer to the following README:

/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton-grpc/README

https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/user_guide/metrics.html

 curl localhost:8002/metrics | grep nv_inference_pending_request_count
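If you want to track the pending count programmatically rather than grepping by hand, here is a small sketch that parses the Prometheus-format text the metrics endpoint returns. The sample payload below is illustrative (the model name is made up); a live query would fetch `http://localhost:8002/metrics` instead:

```python
import re

def pending_request_counts(metrics_text):
    """Return (labels, value) pairs for every
    nv_inference_pending_request_count sample in a Prometheus-format
    metrics payload."""
    results = []
    for line in metrics_text.splitlines():
        m = re.match(
            r'nv_inference_pending_request_count\{(.*)\}\s+(\S+)', line)
        if m:
            results.append((m.group(1), float(m.group(2))))
    return results

# Illustrative payload; a live query would be something like:
#   urllib.request.urlopen("http://localhost:8002/metrics").read().decode()
sample = (
    'nv_inference_pending_request_count{model="mymodel",version="1"} 3\n'
    'nv_inference_request_success{model="mymodel",version="1"} 120\n'
)
print(pending_request_counts(sample))
# -> [('model="mymodel",version="1"', 3.0)]
```

A persistently growing pending count indicates the inference server is the bottleneck, which would explain upstream queues draining empty while frames wait inside Triton.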