I am running inference on multiple files with nvinferserver using Triton over gRPC.
With batch-size greater than 1, the NvDsInferLayerInfo buffer for stream index 0 has garbage values at positions 0 and 1, whereas for stream 1 the returned tensor is present at positions 0 and 1.
With batch-size == 1, inference works as expected.
In other words, when using batch-size > 1 with nvinferserver over gRPC, the layer buffer pointer for stream id 0 points to garbage.
tensor_meta = pyds.NvDsInferTensorMeta.cast(um_frame_meta.user_meta_data)
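For context, this is roughly how I walk the batch meta to reach that cast in my pad probe; the probe name and loop structure below are a simplified sketch rather than my exact code:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def infer_src_pad_buffer_probe(pad, info, u_data):
    # Walk every frame in the batched buffer.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            um_frame_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if um_frame_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(um_frame_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    # With batch-size > 1, the buffer behind this layer is
                    # garbage for frame_meta.batch_id == 0 but valid for
                    # batch_id == 1.
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK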
I also tried another scenario where I used the same files as two filesrc inputs and set:
nvstreammux: batch-size=2, batched-push-timeout=100 ms
nvinferserver: gRPC URL, interval=0, batch-size=2
It then returned classifier meta only for frames with batch_id == 1, and no classifier meta for batch_id == 0.
Please help; am I missing something?
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, which plugin or sample application it concerns, and the function description.)
• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 6.0.1-triton
• NVIDIA GPU Driver Version (valid for GPU only): 510.47.03
• Issue Type (questions, new requirements, bugs): bug
• How to reproduce the issue?
Using this pipeline:

gst-launch-1.0 \
  filesrc location=left.h264 ! h264parse ! nvv4l2decoder name=c102 \
  filesrc location=right.h264 ! h264parse ! nvv4l2decoder name=c104 \
  c102. ! m.sink_0 nvstreammux name=m batch-size=2 width=1920 height=1080 \
  c104. ! m.sink_1 \
  m. ! nvinferserver config-file-path=config.txt ! fakesink
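For reference, config.txt follows the nvinferserver protobuf text format and looks roughly like the sketch below; the model name, gRPC URL, and preprocess/postprocess values here are placeholders rather than my exact settings:

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 2
  backend {
    triton {
      model_name: "my_classifier"    # placeholder model name
      version: -1
      grpc {
        url: "localhost:8001"        # placeholder Triton gRPC endpoint
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    normalize {
      scale_factor: 0.00392156862745098
    }
  }
  postprocess {
    classification {
      threshold: 0.5
    }
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
output_control {
  output_tensor_meta: true
}

output_tensor_meta is enabled so that the raw output tensors are attached as NvDsInferTensorMeta, which is what the probe above reads.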
It returned classifier meta only for frames with batch_id == 1, and no classifier meta for batch_id == 0.
It also returns garbage values in the confidence field for some frames.
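This is roughly how I check for the classifier meta, assuming it hangs off the object meta as in the standard pyds samples (a simplified sketch, not my exact probe):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def classifier_probe(pad, info, u_data):
    # Log which batch_id actually carries classifier meta.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            l_cls = obj_meta.classifier_meta_list
            while l_cls is not None:
                cls_meta = pyds.NvDsClassifierMeta.cast(l_cls.data)
                l_label = cls_meta.label_info_list
                while l_label is not None:
                    label = pyds.NvDsLabelInfo.cast(l_label.data)
                    # Only batch_id == 1 ever prints here, and result_prob
                    # is sometimes garbage for the frames that do print.
                    print(frame_meta.batch_id, label.result_label, label.result_prob)
                    l_label = l_label.next
                l_cls = l_cls.next
            l_obj = l_obj.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK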
There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks