Creating a CUDA tensor buffer fails because kString is not supported

• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 6.2

1. My Triton server's output is a string, and when DeepStream's nvinferserver calls my Triton server it reports that kString is not supported. How can I resolve this?
2. In C++, the output read as (float*) NvDsInferLayerInfo.buffer[i] does not match the Python output np.array(result, dtype=np.float32). How can I resolve this?

  1. Currently kString is still not supported in the latest DS 6.2. The nvinferserver plugin is open source in DS 6.2, so you can modify it if needed. Also, could you share the model? We can look at it at the same time.
  2. NvDsInferLayerInfo's buffer is of type void*; please refer to nvdsinfer.h in the DeepStream SDK. A minimal sketch of casting it follows below.
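
The sketch below is illustrative only (dumpLayer is a hypothetical helper, and it assumes the layer buffer has already been copied to host memory): the void* buffer must be cast to the element type indicated by the layer's dataType before indexing, and casting a non-FLOAT layer to float* is a common cause of values that do not match the Python side.

#include <cstdint>
#include <cstdio>
#include "nvdsinfer.h"

// Hypothetical helper: print the first few elements of an output layer,
// casting buffer to the element type indicated by dataType.
static void dumpLayer(const NvDsInferLayerInfo &layer) {
    unsigned int n = layer.inferDims.numElements;
    switch (layer.dataType) {
    case FLOAT: {
        const float *p = static_cast<const float *>(layer.buffer);
        for (unsigned int i = 0; i < n && i < 8; ++i)
            printf("%s[%u] = %f\n", layer.layerName, i, p[i]);
        break;
    }
    case INT32: {
        const int32_t *p = static_cast<const int32_t *>(layer.buffer);
        for (unsigned int i = 0; i < n && i < 8; ++i)
            printf("%s[%u] = %d\n", layer.layerName, i, (int)p[i]);
        break;
    }
    default:
        // HALF and INT8 need their own element types; see NvDsInferDataType.
        printf("%s: unhandled dataType %d\n", layer.layerName,
               (int)layer.dataType);
    }
}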

# In the Python backend's model.py execute() (assumes numpy as np and
# triton_python_backend_utils as pb_utils are imported at module level)
result = self.detect_frame(img, parameters)
# An object array of UTF-8 bytes maps to Triton's TYPE_STRING (BYTES) output
result_np = np.array([str(result).encode("utf-8")], dtype=np.object_)
out_tensor_0 = pb_utils.Tensor(self.output_names[0], result_np)
inference_response = pb_utils.InferenceResponse(output_tensors=[out_tensor_0])
return inference_response
Above is the Triton server code. Can I add the relevant kString handling in NvDsInferDataType to achieve this function?

Yes, the nvinferserver plugin is open source; you can have a try.
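
For reference, the shipped NvDsInferDataType enum in nvdsinfer.h only covers fixed-size numeric formats, so an extension of this kind is your own change (the STRING entry below is hypothetical, not part of the SDK), and the harder part is teaching nvinferserver's tensor-buffer allocation and copy paths to handle variable-length string data:

// NvDsInferDataType as shipped in nvdsinfer.h, plus a hypothetical entry
typedef enum
{
  FLOAT = 0,  /* FP32 */
  HALF = 1,   /* FP16 */
  INT8 = 2,
  INT32 = 3,
  STRING = 4  /* hypothetical: strings have no fixed element size, so the
                 plugin's buffer allocation/copy code must be reworked too */
} NvDsInferDataType;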

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

Could you share the model via the forum's private email? Thanks!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.