Model Inference Issue: some layers missing or unsupported data types in output tensors

Trying to run inference on a model using the Triton Inference Server; after the model loads, I am facing this issue:

ERROR: some layers missing or unsupported data types in output tensors
ERROR: infer_postprocess.cpp:1023 Failed to parse bboxes using custom parse function
ERROR: infer_postprocess.cpp:362 detection parsing output tensor data failed, uid:1, nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
ERROR: infer_postprocess.cpp:251 Infer context initialize inference info failed, nvinfer error:NVDSINFER_CUSTOM_LIB_FAILED
0:00:20.520231727 70 0x7f6278004f90 WARN nvinferserver gstnvinferserver.cpp:519:gst_nvinfer_server_push_buffer:<primary_gie> error: inference failed with unique-id:1


Hey, can you provide your setup?

Hey, I am using the Triton container “deepstream:5.0-20.07-triton” on a cloud instance with a Tesla T4 GPU (AWS g4dn.xlarge), so the CUDA, NVIDIA driver, etc. configuration is the same as that of the container.

It seems you need to write your own post-processing parser; please refer to nvdsinfer_custombboxparser.cpp.

We will look into it and try to implement it; if we get stuck, we will reach out to you.