How do sub-batches work with the maximum number of streams?

Please provide complete information as applicable to your setup.

• **Hardware Platform**: GPU
• **DeepStream Version**: 6.4
• **TensorRT Version**: 8.5
• **NVIDIA GPU Driver Version**: 545

I am experimenting with the “sub-batches” tracker feature.
I have 10 streams, and each stream has around 100 objects.
For each stream I created one sub-batch (10 streams, 10 sub-batches). After running for some time, I get this error:

“gstnvtracker: ConvBuf status is not available for active batch!
gstnvtracker: completed req is not in active batch!
Error: gst-stream-error-quark: Failed to submit input to tracker (1), gstnvtracker.cpp(706): gst_nv_tracker_submit_input_buffer ():”

Why does this error occur?
I want to use sub-batches wisely, so that performance and accuracy are balanced.
When I set 3 sub-batches (3 streams, 3 streams, 4 streams), it works well.
How do I determine how many streams per sub-batch will work well?

If you have a list or tested measurements, please share them!

Could you attach your whole pipeline and the config file?
Since you want to use sub-batching, could you upgrade your DeepStream version to 7.0?

Hi @yuweiw ,

I’m using 6.4 as of now. Why are you suggesting an upgrade? Is batch processing not available in 6.4?
I did not see any documentation differences between 6.4 and 7.0 for sub-batches and the tracker.
I’m attaching my pipeline graph as well.

And my tracker.txt looks like this:

[tracker]
tracker-width=516
tracker-height=516
gpu-id=0
ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
ll-config-file=tracker.yml;tracker_nvsort.yml;tracker_dcf_max_perf.yml;tracker.yml;tracker_nvsort.yml;tracker_dcf_max_perf.yml;tracker.yml;tracker_nvsort.yml;tracker_dcf_max_perf.yml;tracker.yml
sub-batches=[0;1;2;3;4;5;6;7;8;9]
#enable-past-frame=1
#enable-batch-process=1
sub-batch-err-recovery-trial-cnt=3

Yes, it’s available in the 6.4 version. But if you want to use deepstream-app, note that sub-batch parsing was not added to the deepstream-app source code in DS 6.4.

Could you remove the bracket for sub-batch parameter in the config file?
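For reference, the `sub-batches` line from the tracker.txt above, with the brackets dropped and the values left unchanged, would read:

```ini
; same sub-batch layout as before, just without the surrounding brackets
sub-batches=0;1;2;3;4;5;6;7;8;9
```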

Hi @yuweiw

In Deepstream documentation
> Option 1: Semicolon delimited integer array where each number corresponds to source id.
>
> Must include all values from 0 to (batch-size - 1) where batch-size is configured in [streammux].
>
> Option 2: Colon delimited integer array where each number corresponds to size of a sub-batch (i.e. max number of streams a sub-batch can accommodate)
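Reading Option 2 against the layout that worked earlier in this thread (3 streams + 3 streams + 4 streams for a 10-stream batch), that would presumably be written as colon-delimited sub-batch sizes:

```ini
; Option 2: three sub-batches of sizes 3, 3 and 4 (totalling batch-size=10)
sub-batches=3:3:4
```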
tracker.txt
Here it’s not picking up the config from the file, so I’m setting the property like this:
new_model_element.set_property('sub-batches', "0;1;2;3;4;5;6;7;8;9")

It runs, but after some time it shows the above error and stops. I want to understand sub-batching with proper utilisation: how many streams can one sub-batch handle, and what is the appropriate way to use batch processing?
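To make the two formats quoted from the docs concrete, here is a small hypothetical helper (not part of DeepStream) that sanity-checks a `sub-batches` string before you pass it to `set_property`. It assumes, as in this thread, that the semicolon form puts one source id per sub-batch:

```python
# Hypothetical sanity-check for a sub-batches string; not a DeepStream API.

def parse_sub_batches(value: str, batch_size: int) -> list[int]:
    """Return the number of streams in each sub-batch.

    Option 1 (semicolon-delimited source ids, one per sub-batch here):
        "0;1;2" -> three sub-batches of one stream each.
    Option 2 (colon-delimited sub-batch sizes):
        "3:3:4" -> three sub-batches of 3, 3 and 4 streams.
    """
    if ":" in value:
        sizes = [int(tok) for tok in value.split(":")]
    else:
        ids = [int(tok) for tok in value.split(";")]
        if sorted(ids) != list(range(batch_size)):
            raise ValueError("must include all source ids 0..batch-size-1")
        sizes = [1] * len(ids)
    if sum(sizes) != batch_size:
        raise ValueError("sub-batch sizes must sum to batch-size")
    return sizes
```

For example, `parse_sub_batches("3:3:4", 10)` accepts the working 3/3/4 layout, while a string that does not cover the full batch raises an error before the pipeline ever starts.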

I cannot reproduce your problem on my side. Since the code is open source, you can add some logging to do a preliminary analysis.

sources\gst-plugins\gst-nvtracker

You can refer to our guide: Sub-batching (Alpha), Setup and Usage of Sub-batching (Alpha).

Hello @yuweiw !

I have gone through the docs.

Can you help us understand how many sub-batches can be supported by one tracker?

Please provide performance metrics so that we can decide whether to go with sub-batches or with multiple tracker instances for one SGIE.

With more than 4 sub-batches, our real-time application gives this error:

gstnvtracker: ConvBuf status is not available for active batch!
gstnvtracker: ConvBuf status is not available for active batch!
gstnvtracker: completed req is not in active batch!
gstnvtracker: completed req is not in active batch!

We have referred to this code: “gst-plugins/gst-nvtracker/nvtracker_proc.cpp”.
Please look into this.

We added an illustration of the sub-batching examples in nvtracker_proc.cpp in DS 7.0. You can refer to that.

We don’t have performance metrics at the moment. This parameter should be configured according to the GPU load of your device.
Could you attach your video source with 100 objects and specify your GPU model? We can try it on our side.
You can also monitor your GPU load and adapt the value accordingly.
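As a sketch of the “monitor your GPU load” suggestion: one simple approach is to poll `nvidia-smi` while the pipeline runs. The helper below is an illustration, not an NVIDIA-provided tool; the parsing assumes the `--query-gpu=utilization.gpu --format=csv,noheader,nounits` output format:

```python
import subprocess

def gpu_utilization(csv_line: str) -> int:
    """Parse one line of nvidia-smi CSV output (noheader, nounits)
    into a utilization percentage, e.g. " 87 " -> 87."""
    return int(csv_line.strip())

def sample_gpu_utilization() -> int:
    # Shells out to nvidia-smi; requires an NVIDIA driver on the host.
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    )
    return gpu_utilization(out.stdout.splitlines()[0])
```

If utilization stays pinned near 100% as you add sub-batches, that is a sign to reduce the sub-batch count or move some streams to another tracker instance.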

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.