• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.1 (Python)
• TensorRT Version: 10.3.0.26
• NVIDIA GPU Driver Version (valid for GPU only): 555.58.02
• Issue Type (questions, new requirements, bugs): questions
Hello. I have a few questions about working with the queue.
Where are the frames in the queue stored? Are they in RAM or in GPU memory?
How do I see how many frames are in the queue in Python DeepStream? I used .add_probe on the src and sink pads and looked at the current-level-time and current-level-bytes attributes, but they always return 0. For reading, I use nvmultiurisrcbin, followed by a queue and then nvinferserver.
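Roughly what I tried, as a simplified sketch (the real pipeline uses nvmultiurisrcbin and nvinferserver; here I substitute a test source so the snippet is self-contained):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Simplified stand-in for nvmultiurisrcbin -> queue -> nvinferserver.
pipeline = Gst.parse_launch("videotestsrc ! queue name=q ! fakesink")
queue = pipeline.get_by_name("q")

def on_buffer(pad, info):
    # current-level-* are properties of the queue element itself.
    print(
        "buffers:", queue.get_property("current-level-buffers"),
        "bytes:", queue.get_property("current-level-bytes"),
        "time(ns):", queue.get_property("current-level-time"),
    )
    return Gst.PadProbeReturn.OK

for pad_name in ("sink", "src"):
    queue.get_static_pad(pad_name).add_probe(
        Gst.PadProbeType.BUFFER, on_buffer
    )

pipeline.set_state(Gst.State.PLAYING)
# In the real application a GLib.MainLoop runs here.
```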
The answer to this question depends on the type of your GstBuffer. If it is an NVMM-type buffer, the video frame is stored in GPU memory, but the NvBufSurface is stored as a handle on the CPU. Other types of buffers are stored in RAM.
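For illustration, here is one way to check this from Python (a sketch; `is_nvmm` is just a name made up for this example — it inspects whether the pad's negotiated caps carry the memory:NVMM feature):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

def is_nvmm(pad: Gst.Pad) -> bool:
    """Return True if the negotiated caps on `pad` carry the
    memory:NVMM feature, i.e. the frame data lives in GPU memory
    (while the NvBufSurface handle itself stays on the CPU)."""
    caps = pad.get_current_caps()
    if caps is None or caps.get_size() == 0:
        return False
    return caps.get_features(0).contains("memory:NVMM")
```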
It is not possible to know the exact number. If current-level-bytes is 0, it means there is no backlog in data processing.
Thanks for the reply. Regarding current-level-bytes being 0: there must be a delay in processing the data, because responses from the Triton server take a very long time to arrive. Because of this, the queue should fill up very quickly, since nvmultiurisrcbin continues to read frames.
A queue is a thread-boundary element through which you can force the use of separate threads. Although Triton has a delay, that does not mean data will be cached in the queue.
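By default a queue only holds up to its configured limits, and once full it blocks the upstream streaming thread rather than accumulating data indefinitely. A sketch of how those limits are usually set (the values here are arbitrary examples, not recommendations):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

queue = Gst.ElementFactory.make("queue", "infer_queue")
queue.set_property("max-size-buffers", 30)  # cap by buffer count
queue.set_property("max-size-bytes", 0)     # 0 means no byte limit
queue.set_property("max-size-time", 0)      # 0 means no time limit
# With leaky=2 (downstream), old buffers are dropped instead of
# blocking the upstream thread once the queue is full.
queue.set_property("leaky", 2)
```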
Apologies for the late reply. I have a DeepStream pipeline that processes live video streams, and I would like to detect cases where there is a delay between the reader module and the inference module. Between these two modules I have a queue (nvmultiurisrcbin -> queue -> nvinferserver).
Which, if I understand correctly, simply accumulates frames when nvinferserver cannot keep up with processing them?
And I want to keep track of how many frames are currently in the queue. Can you tell me whether this is possible?
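To make the question concrete, this is the kind of monitoring I have in mind (a sketch; in the real pipeline "q" would be the queue between nvmultiurisrcbin and nvinferserver):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Stand-in pipeline for illustration only.
pipeline = Gst.parse_launch("videotestsrc ! queue name=q ! fakesink sync=true")
queue = pipeline.get_by_name("q")

def report_queue_level():
    # current-level-buffers should be the number of GstBuffers
    # (frames) sitting in the queue right now.
    print("frames in queue:", queue.get_property("current-level-buffers"))
    return True  # keep the 1-second timer running

pipeline.set_state(Gst.State.PLAYING)
GLib.timeout_add_seconds(1, report_queue_level)
GLib.MainLoop().run()
```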