What do the PacketCache and Frame Pool memory-usage metrics look like after a long run?

If we use DeepStream in a real system, these questions must be answered:

  1. Since DeepStream manages the frame pool, how can we customize the batch size (number of frames) coming from the input module? We really need this feature, because we analyze the video stream with a specific batch size; otherwise we have to invent another tiny frame pool in the custom module.
  2. How can we calculate or estimate the memory usage of the PacketCache and FRAME POOL in DeepStream? Can it scale in a production environment?
  3. There is only one Flexible Pipeline to process the frames generated by multiple video streams. Is there a strategy for real overflow control? That is, if the Flexible Pipeline processes frames more slowly than the video streams generate them, how is that handled, and by whom?

Hi,

1.
Could you share more information about the batch size mentioned here?
Is it for deep learning inference or the number of decoded frames?

2.
You can use nvprof to monitor GPU memory usage.
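For a quick in-process check (as opposed to attaching nvprof externally), here is a minimal sketch using the CUDA runtime's cudaMemGetInfo; this is plain CUDA, not a DeepStream API:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Print current used/total device memory; call this periodically during a
// long run to watch for growth in PacketCache / frame pool allocations.
void logGpuMemory(const char* tag) {
    size_t freeBytes = 0, totalBytes = 0;
    if (cudaMemGetInfo(&freeBytes, &totalBytes) == cudaSuccess) {
        std::printf("[%s] GPU memory used: %.1f / %.1f MiB\n", tag,
                    (totalBytes - freeBytes) / (1024.0 * 1024.0),
                    totalBytes / (1024.0 * 1024.0));
    }
}
```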

3.
In our sample, we show how to obtain pipeline performance information.
We don't include an implementation for the slow-pipeline case you mentioned.
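Since the SDK does not handle that case, one generic strategy (not part of DeepStream) is to put a bounded, drop-oldest queue between the decoders and the pipeline, so the application itself performs the overflow control; a minimal sketch:

```cpp
#include <atomic>
#include <deque>
#include <mutex>

// Bounded queue that drops the oldest frame when full, so memory stays flat
// even when the analysis pipeline falls behind the combined video streams.
template <typename Frame>
class DropOldestQueue {
public:
    explicit DropOldestQueue(size_t capacity) : capacity_(capacity) {}

    // Producer side: decoded frames from all channels land here.
    void push(Frame f) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.size() >= capacity_) {
            queue_.pop_front();          // drop the oldest frame, don't grow
            dropped_.fetch_add(1);
        }
        queue_.push_back(std::move(f));
    }

    // Consumer side: the (possibly slower) analysis pipeline.
    bool tryPop(Frame& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (queue_.empty()) return false;
        out = std::move(queue_.front());
        queue_.pop_front();
        return true;
    }

    size_t dropped() const { return dropped_.load(); }  // overflow statistics

private:
    std::deque<Frame> queue_;
    std::mutex mutex_;
    size_t capacity_;
    std::atomic<size_t> dropped_{0};
};
```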

Thanks.

Hi,
1.
We have a deep network that detects human actions. It analyzes 10 frames per batch, and the video FPS is set to 10, so yes, we analyze one second of video at a time. The batch size is for the deep learning inference module in the DeepStream library.

2.

We want to control the host and GPU memory usage, not only profile it. We hope the PacketCache and FRAME POOL don't consume too much memory.

Then, in your detection sample, what is the maximum number of 1080p video channels supported with one inference instance?

Hi,

  1. If you want to apply inference every three frames, please write a custom module to handle this (see the sketch after this list).

  2. This is managed by the user. It depends on how many channels are launched.

  3. The input of the PacketCache is managed by the user.
    In this sample, we demonstrate channel=4 and reach 24-25 frames per second for each channel.
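Regarding item 1, a minimal sketch of what such a custom module could do: accumulate decoded frames until the desired batch size is reached (e.g. 10, per the use case above) and then trigger inference. The Frame type and the onFrame/runInference names are placeholders for illustration, not the actual DeepStream module interface:

```cpp
#include <cstddef>
#include <vector>

struct Frame;                                   // decoded frame (placeholder)
void runInference(const std::vector<Frame*>&);  // user-supplied inference call

// Hypothetical custom module that batches frames before inference.
class BatchAssembler {
public:
    explicit BatchAssembler(size_t batchSize) : batchSize_(batchSize) {}

    // Called once per decoded frame; fires inference every batchSize_ frames.
    void onFrame(Frame* f) {
        batch_.push_back(f);
        if (batch_.size() == batchSize_) {
            runInference(batch_);
            batch_.clear();   // frames may now return to the frame pool
        }
    }

private:
    size_t batchSize_;
    std::vector<Frame*> batch_;
};
```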

Thanks.

2.
We can control the inference module's GPU memory consumption using inferenceParams.workspaceSize_. What does the memory consumption of the PacketCache and FRAME POOL depend on, and do they use GPU or host memory? Let me guess: the PacketCache uses host memory, and it depends on the channel count and packet size (average, min, max?). In a more precise form: PacketCache size = size of one video packet × N packets per video stream × number of channels. How about the FRAME POOL?
We need to calculate this because we have to assemble the hardware and report to the boss: this is the hardware and software configuration (price $), and it achieves this performance: bla, bla, bla…
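Following that guessed formula, a back-of-the-envelope estimator might look like this; every input value below is an assumption to be measured from your own streams, not a constant published by DeepStream:

```cpp
#include <cstdio>

int main() {
    // All values are assumptions for illustration, not DeepStream defaults.
    const double channels       = 30;                 // number of video streams
    const double packetBytes    = 256 * 1024;         // avg encoded packet size
    const double packetsPerChan = 64;                 // PacketCache depth per stream
    const double framesPerChan  = 16;                 // frame pool depth per stream
    const double frameBytes     = 1920 * 1080 * 1.5;  // decoded 1080p NV12 frame

    // PacketCache (host memory, per the guess above):
    // packet size * N packets per stream * channels
    const double packetCacheMiB =
        packetBytes * packetsPerChan * channels / (1024.0 * 1024.0);

    // Frame pool (GPU memory, since decoded frames are device-resident):
    // frame size * N frames per stream * channels
    const double framePoolMiB =
        frameBytes * framesPerChan * channels / (1024.0 * 1024.0);

    std::printf("PacketCache (host): ~%.0f MiB\n", packetCacheMiB);
    std::printf("Frame pool  (GPU):  ~%.0f MiB\n", framePoolMiB);
    return 0;
}
```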

Hi,

For the inference module, we have several implementations, and one is selected automatically based on the maximal workspace value provided by the user.
For the decoder module, the implementation is fixed.

Moreover, the decoder uses GPU memory.
But if the input stream is located in CPU memory, it also consumes some CPU memory.
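As a worked example: a decoded 1080p frame in NV12 format is 1920 × 1080 × 1.5 ≈ 3.1 MB, so a frame pool holding, say, 16 such frames per channel across 4 channels would occupy roughly 200 MB of GPU memory, on top of the decoder's own fixed allocations (the pool depth here is an assumed figure, not a DeepStream constant).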

Thanks.

Thanks, this gives me a clue for estimating DeepStream resource requirements.