NVML overhead on latency of applications running on the GPU

I am planning to query the status of my GPU once per second using NVML while an inference process is running on the same GPU.
Does querying GPU status through NVML have any side effect on the latency of the inference? I do not want the inference latency to be affected.
By inference latency I mean the time it takes to run a machine learning model on the GPU for one batch of inputs.
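For concreteness, here is a minimal sketch of the kind of polling loop I have in mind, using the pynvml Python bindings; the utilization query is just one example of a status query, and the 1-second interval is the one mentioned above:

```python
import time

def poll(query, interval_s=1.0, iterations=3):
    """Call query() once per interval and collect the returned samples."""
    samples = []
    for i in range(iterations):
        samples.append(query())
        if i < iterations - 1:  # no trailing sleep after the last sample
            time.sleep(interval_s)
    return samples

# Example status query via the pynvml bindings; skipped gracefully when
# pynvml is not installed or no NVIDIA driver/GPU is present.
try:
    import pynvml
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    gpu_query = lambda: pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
    print(poll(gpu_query, interval_s=1.0))  # e.g. [87, 91, 90] (% utilization)
    pynvml.nvmlShutdown()
except Exception:
    pass  # no NVML available: poll() itself still works with any query callable
```

The polling runs in a separate process from the inference, so my concern is whether the NVML calls themselves contend with the inference workload on the driver or GPU side.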

Thank you,