Using batch-size > 1, nvinferserver on gRPC doesn't return metadata for each stream; it's mixed and flattened

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 6.0.1-triton
• NVIDIA GPU Driver Version (valid for GPU only): 510.47.03

I am running inference on multiple files with nvinferserver, using Triton over gRPC.
Since the batch-size is greater than 1, the NvDsInferLayerInfo buffer for stream index 0 contains garbage values at positions 0 and 1, whereas for stream 1 the returned tensor is at positions 0 and 1.

When batch-size == 1, inference works as expected.

infer_config {
  unique_id: 1
  gpu_ids: 0
  max_batch_size: 30
  backend {
    inputs: [ {
      name: "INPUT"
    }]
    outputs: [
      {name: "OUTPUT"}
    ]
    triton {
      model_name: "wsframe-sd-default-ensemble"
      version: -1
      grpc {
        url: "10.54.18.225:8001"
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NHWC
    tensor_name: "INPUT"
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize { scale_factor: 1.0 }
  }

  postprocess {
    other {}
  }

  extra {
    copy_input_to_host_buffers: false
    output_buffer_pool_size: 6
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  operate_on_gie_id: -1
  interval: 2
}

output_control {
  output_tensor_meta: true
}
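
For reference, this is roughly how I read the raw tensor meta that output_tensor_meta: true attaches, in a pad probe on the nvinferserver source pad (a minimal sketch assuming the standard pyds bindings; the probe name is mine):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def pgie_src_pad_buffer_probe(pad, info, u_data):
    # Batch meta is attached to the GstBuffer by nvstreammux.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Walk the frame-level user meta looking for raw tensor output.
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                print(frame_meta.batch_id, tensor_meta.num_output_layers)
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK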

When batch-size > 1, the tensor returned for stream 0 does not start at index 0 of the NvDsInferLayerInfo buffer; indices 0 and 1 contain zeros.

When batch-size == 1, the returned tensor starts at index 0 of the NvDsInferLayerInfo buffer, which is the desired behaviour.

My output tensor from Triton has shape [-1, 2].

When using batch-size > 1 with nvinferserver on gRPC, the buffer pointer for stream id 0 points to a garbage value. I read the tensor like this:

import ctypes
import numpy as np
import pyds

# Cast the frame-level user meta to tensor meta and fetch layer 0.
tensor_meta = pyds.NvDsInferTensorMeta.cast(um_frame_meta.user_meta_data)
layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)

# View the layer buffer as two float32 values (output shape is [-1, 2]).
ptr = ctypes.cast(pyds.get_ptr(layer.buffer), ctypes.POINTER(ctypes.c_float))
v = np.ctypeslib.as_array(ptr, shape=(2,))

For stream id 0 this points to a garbage value. Any help would be highly appreciated.
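
In the meantime I am experimenting with an unverified workaround: if the gRPC path really flattens the whole batch into a single layer buffer, the current frame's values can be sliced out by its batch index. This is only a sketch under that assumption; the contiguous batch-order packing and the batch_size argument are assumptions, not confirmed behaviour:

import ctypes
import numpy as np
import pyds

FRAME_TENSOR_LEN = 2  # output shape is [-1, 2], i.e. 2 floats per frame

def read_frame_tensor(frame_meta, user_meta, batch_size):
    # Assumption: the layer buffer holds batch_size * FRAME_TENSOR_LEN floats
    # packed contiguously in batch order, not just this frame's values.
    tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
    layer = pyds.get_nvds_LayerInfo(tensor_meta, 0)
    ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                      ctypes.POINTER(ctypes.c_float))
    batch_view = np.ctypeslib.as_array(ptr, shape=(batch_size * FRAME_TENSOR_LEN,))
    start = frame_meta.batch_id * FRAME_TENSOR_LEN
    return batch_view[start:start + FRAME_TENSOR_LEN].copy()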

Which batch-size do you mean?

I am using nvinferserver with a remote gRPC URL.

It skips inference on the first 2 frames in a batch and returns inference on the rest. I am using interval=0.

nvstreammux: batch-size=10, batched-push-timeout=100 msec
nvinferserver (gRPC URL): interval=0, batch-size=30

I am using 10 filesrc inputs; the result skips the first two frames in each batch, and inference is returned on the rest of the frames in the batch.
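
To see which frames in a batch were actually inferred, I check NvDsFrameMeta.bInferDone in a probe downstream of nvinferserver (a diagnostic sketch, assuming the standard pyds bindings):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import pyds

def infer_done_probe(pad, info, u_data):
    # Print, per batch, which frames nvinferserver actually processed.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        print("source=%d batch_id=%d infer_done=%d" % (
            frame_meta.pad_index, frame_meta.batch_id, frame_meta.bInferDone))
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK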

I also tried another scenario where I used the same files as 2 filesrc inputs, and set:
nvstreammux: batch-size=2, batched-push-timeout=100 msec
nvinferserver (gRPC URL): interval=0, batch-size=2

Then it returned classifier meta only for frames with batch_id == 1; there was no classifier meta for batch_id == 0.
Please help, am I missing something?

They won't help you; I have waited a month for a valuable response from them.

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name: which plugin or sample application, and the function description.)

• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 6.0.1-triton
• NVIDIA GPU Driver Version (valid for GPU only): 510.47.03
• Issue Type (questions, new requirements, bugs): BUG
• How to reproduce the issue? Using this pipeline:
gst-launch-1.0 \
filesrc location=left.h264 ! h264parse ! nvv4l2decoder name=c102 \
filesrc location=right.h264 ! h264parse ! nvv4l2decoder name=c104 \
c102. ! m.sink_0 nvstreammux name=m batch-size=2 width=1920 height=1080 \
c104. ! m.sink_1 \
m. ! nvinferserver config-file-path=config.txt ! fakesink
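
The same reproduction can be scripted in Python with Gst.parse_launch (a sketch; the probe body is a placeholder for the tensor-meta probe posted earlier in this topic, and nvstreammux is declared first so the m.sink_N references resolve):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "nvstreammux name=m batch-size=2 width=1920 height=1080 ! "
    "nvinferserver name=pgie config-file-path=config.txt ! fakesink "
    "filesrc location=left.h264 ! h264parse ! nvv4l2decoder ! m.sink_0 "
    "filesrc location=right.h264 ! h264parse ! nvv4l2decoder ! m.sink_1")

def probe(pad, info, u_data):
    # Placeholder: inspect per-frame tensor/classifier meta here.
    return Gst.PadProbeReturn.OK

# Watch buffers leaving nvinferserver.
pipeline.get_by_name("pgie").get_static_pad("src").add_probe(
    Gst.PadProbeType.BUFFER, probe, None)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
finally:
    pipeline.set_state(Gst.State.NULL)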

The config.txt is the one posted at the top of this topic.

batch-size is 2 for nvstreammux.

Classifier meta is returned only for frames with batch_id == 1; there is no classifier meta for batch_id == 0.
It also returns garbage values in the confidence field for some frames.


Yes, I also got corrupted frames back from inference.

Sorry for the late response. Can you share your model and config files so we can reproduce your problem?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.