Sending empty arrays from Triton using nvinferserver

• Hardware Platform (Jetson / GPU): GPU: RTX Titan or RTX 2080
• DeepStream Version: 6.0
• NVIDIA GPU Driver Version (valid for GPU only): 465.31
• Issue Type( questions, new requirements, bugs): Question
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing): Send an empty array such as np.ones((4, 0))

I have a Triton model for object detection. Sometimes I need to send empty tensors (when no objects were detected), but when I do, I get the following errors.

/opt/tritonserver/backends/python/startup.py:308: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  if output_np_array.dtype == np.object or output_np_array.dtype.type is np.bytes_:
pytorch-test-app: infer_cuda_utils.cpp:91: nvdsinferserver::CudaTensorBuf::CudaTensorBuf(const nvdsinferserver::InferDims&, nvdsinferserver::InferDataType, int, const string&, nvdsinferserver::InferMemType, int, bool): Assertion `bufBytes > 0' failed.
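The assertion fires because a tensor with a 0-sized dimension occupies zero bytes, so `bufBytes > 0` can never hold. A quick standalone numpy check (outside DeepStream) illustrates this:

```python
import numpy as np

# Any shape containing a 0 has zero elements and therefore zero bytes,
# which is what trips nvinferserver's `bufBytes > 0` assertion.
empty = np.ones((1, 0, 6), dtype=np.float32)
print(empty.size)    # 0
print(empty.nbytes)  # 0
```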

Usually the numpy tensors have the shape [1, 0, 6].

I have tried sending empty arrays and also omitting the output from the InferenceResponse object, but in both cases I am unable to send results. The only workaround I have found is to send an invalid result and check on the receiving side whether it is valid. What is the best way to send empty tensors in this case?
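For reference, the sentinel workaround looks roughly like this (plain numpy; the `-1.0` fill value and the `make_result`/`is_valid` helpers are my own convention, not anything DeepStream- or Triton-specific):

```python
import numpy as np

SENTINEL = -1.0  # arbitrary marker value that cannot occur in a real detection

def make_result(detections: np.ndarray) -> np.ndarray:
    """Replace an empty [1, 0, 6] tensor with a [1, 1, 6] sentinel row."""
    if detections.size == 0:
        return np.full((1, 1, 6), SENTINEL, dtype=np.float32)
    return detections

def is_valid(result: np.ndarray) -> bool:
    """A result is real unless every value equals the sentinel."""
    return not np.all(result == SENTINEL)
```

The consumer then calls `is_valid` on each received tensor and discards sentinel rows instead of treating them as detections.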

What do you mean by empty tensor? One where all the data are zero, or one with no data at all and only the shape [1, 0, 6]?

I mean a tensor whose shape is something like Nx0, or any other shape with a 0 in one of its dimensions.

OK, it looks like DeepStream nvinferserver/Triton does not support empty tensors.
I will double-check and get back to you next week.


Hi @1993mwc,
Is it possible to share a repro case with us?
We want to check whether we can support this in a future release.

Sure, I will share a quick one. I will get back to you once I have it.