Nvinferserver does not support string output models?

• Hardware Platform (Jetson / GPU): RTX 2080
• DeepStream Version: 6.0.1
• JetPack Version (valid for Jetson only): None
• TensorRT Version: 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 495.29.05
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file content, the command line used, and other details for reproducing): None
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description): None


I am testing nvinferserver + the ONNXRuntime backend + IInferCustomProcessor with an ONNX model that outputs a string tensor.
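To make the setup concrete, here is a minimal sketch of the kind of model I mean (the tensor names and the single-node Identity graph are placeholders; my real model is different). The model itself loads and runs under plain ONNX Runtime, so the failure below looks specific to how nvinferserver allocates output buffers:

```python
# Minimal repro sketch (hypothetical tensor names "text_in"/"text_out";
# my real model is different): an ONNX graph whose output is a string tensor.
import numpy as np
import onnx
import onnxruntime as ort
from onnx import TensorProto, helper

inp = helper.make_tensor_value_info("text_in", TensorProto.STRING, [1])
out = helper.make_tensor_value_info("text_out", TensorProto.STRING, [1])
node = helper.make_node("Identity", ["text_in"], ["text_out"])
graph = helper.make_graph([node], "string_output_repro", [inp], [out])
model = helper.make_model(graph)
onnx.checker.check_model(model)
onnx.save(model, "string_output_repro.onnx")

# The same model runs fine under plain ONNX Runtime
# (ORT maps string tensors to numpy object arrays):
sess = ort.InferenceSession("string_output_repro.onnx",
                            providers=["CPUExecutionProvider"])
print(sess.run(["text_out"], {"text_in": np.array(["hello"], dtype=object)}))
```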

I get the following error and my app fails:

I0804 01:57:34.410246 24 model_repository_manager.cc:1212] successfully loaded 'ModelName' version 1
ERROR: infer_cuda_utils.cpp:155 create cuda tensor buf fail since kString is not supported.
0:00:12.072924094    24 0x562bdb26ac70 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in addHostTensorPool() <infer_cuda_context.cpp:479> [UID = 1]: failed to create cpu tensor:modelOutputLayerName while adding tensor pool
0:00:12.072958885    24 0x562bdb26ac70 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in allocateResource() <infer_cuda_context.cpp:538> [UID = 1]: failed to allocate resource for postprocessor., nvinfer error:NVDSINFER_RESOURCE_ERROR
0:00:12.072993559    24 0x562bdb26ac70 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary_gie> nvinferserver[UID 1]: Error in initialize() <infer_base_context.cpp:109> [UID = 1]: Failed to allocate buffers

I think nvinferserver should handle string outputs, because the enum class nvdsinferserver::InferDataType has a kString member.

Does nvinferserver support string output models?

Also note that I set output_mem_type: MEMORY_TYPE_CPU in the config.
The "create cuda tensor buf fail" error message is odd to me: since output_mem_type (the memory type for Triton's native output tensors) is CPU, there should be no need to allocate a CUDA tensor to hold the model outputs.
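For reference, the relevant fragment of my nvinferserver config looks roughly like this (a trimmed sketch; the model name and the other field values are placeholders):

```
infer_config {
  unique_id: 1
  backend {
    triton {
      model_name: "ModelName"
      version: -1
    }
    # Request Triton native output tensors in CPU memory; with this set,
    # I expected no CUDA tensor allocation for the model outputs.
    output_mem_type: MEMORY_TYPE_CPU
  }
}
```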

Hi, I am still waiting for a response (we are using DeepStream in our products).

Sorry for the late response; our team will investigate and provide suggestions soon. Thanks.

This is not supported yet, even though the enum class nvdsinferserver::InferDataType has a kString member.

The subsequent errors are caused by the "kString is not supported" error above.
