Description
My team has found a bug in DeepStream 7.1. We were using an RT-DETR model with nvinfer and outputting the raw tensor metadata so we could handle the parsing ourselves downstream. Our model has an output layer with an INT64 data type. We noticed that INT64 support was added in DeepStream 7.1: the enum NvDsInferDataType in the file sources/includes/nvdsinfer.h includes INT64 as an option.
typedef enum
{
  /** Specifies FP32 format. */
  FLOAT = 0,
  /** Specifies FP16 format. */
  HALF = 1,
  /** Specifies INT8 format. */
  INT8 = 2,
  /** Specifies INT32 format. */
  INT32 = 3,
  /** Specifies INT64 format. */
  INT64 = 4
} NvDsInferDataType;
The issue appears when output-tensor-meta is enabled. The function get_element_size in the file sources/gst-plugins/gst-nvinfer/gstnvinfer_meta_utils.cpp has no case in its switch statement for INT64, so it never returns the correct number of bytes for that type.
static inline int
get_element_size (NvDsInferDataType data_type)
{
  switch (data_type) {
    case FLOAT:
      return 4;
    case HALF:
      return 2;
    case INT32:
      return 4;
    case INT8:
      return 1;
    default:
      return 0;
  }
}
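The fix looks like a one-line addition. A minimal sketch of a patched get_element_size, assuming INT64 elements are 8 bytes wide (sizeof (int64_t)), would be:

static inline int
get_element_size (NvDsInferDataType data_type)
{
  switch (data_type) {
    case FLOAT:
      return 4;
    case HALF:
      return 2;
    case INT32:
      return 4;
    case INT8:
      return 1;
    case INT64:
      /* INT64 elements are 8 bytes wide. */
      return 8;
    default:
      return 0;
  }
}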
This causes the code below, found in the function attach_tensor_output_meta, to misbehave.
for (unsigned int i = 0; i < meta->num_output_layers; i++) {
  NvDsInferLayerInfo & info = meta->output_layers_info[i];
  meta->out_buf_ptrs_dev[i] =
      (uint8_t *) batch_output->outputDeviceBuffers[i] +
      info.inferDims.numElements * get_element_size (info.dataType) * j;
  meta->out_buf_ptrs_host[i] =
      (uint8_t *) batch_output->hostBuffers[info.bindingIndex] +
      info.inferDims.numElements * get_element_size (info.dataType) * j;
}
In this code, j is the frame index. Because get_element_size returns 0 for INT64, the per-frame offset is always 0, so the pointers in out_buf_ptrs_dev and out_buf_ptrs_host always reference the start of batch_output->outputDeviceBuffers and batch_output->hostBuffers. As a result, the layer info for each frame is overridden by subsequent frames in the batch.
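For illustration, the offset arithmetic collapses to zero for every frame. In the sketch below, the layer size and batch size are made-up numbers, not values from our model:

#include <cstddef>
#include <cstdio>

int main ()
{
  const int element_size = 0;            /* what get_element_size () currently returns for INT64 */
  const std::size_t num_elements = 300;  /* hypothetical layer size */
  for (unsigned j = 0; j < 4; ++j) {     /* hypothetical batch of 4 frames */
    /* Same per-frame offset arithmetic as attach_tensor_output_meta. */
    std::size_t offset = num_elements * element_size * j;
    /* With the correct element size of 8, offsets would be 0, 2400, 4800, 7200. */
    std::printf ("frame %u -> byte offset %zu\n", j, offset);  /* always prints 0 */
  }
  return 0;
}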
Is this something that could be patched in an upcoming release?