Hi. I’m trying to serve a TensorRT Mask R-CNN model with Triton Inference Server. We have this line in the sampleUffMaskRCNN preprocessing code:
hostDataBuffer[i * volImg + c * volChl + j] = float(mPPMs[i].buffer[j * inputC + c]) - pixelMean[c];
This is the float data that I need to append to the Triton input pointer. How can I do this correctly? I have tried type-casting the floats to uint8 in several ways, and although the code builds without errors, I get the following at runtime:
[libprotobuf ERROR /workspace/build/grpc-repo/src/grpc/third_party/protobuf/src/google/protobuf/wire_format_lite.cc:603] String field 'nvidia.inferenceserver.ModelInferResponse.InferOutputTensor.ParametersEntry.key' contains invalid UTF-8 data when parsing a protocol buffer. Use the 'bytes' type if you intend to send raw bytes.
Any help regarding this would be really appreciated!