How can I specify complex inference results?

In the Python API reference (https://docs.nvidia.com/deeplearning/sdk/inference-server-archived/tensorrt_inference_server_0110_beta/tensorrt-inference-server-guide/docs/python_api.html#module-tensorrtserver.api), it looks like the inference ResultFormat can only be a numpy array or an array of (index, value, label) tuples, but my model returns a complex result consisting of a dict with many key-value pairs, like the one below:

{
  "response": {
    "status": 200,
    "service": "http://192.168.15.1:5509/lungDAC/",
    "time": 1558688746822
  },
  "result": {
    "dicoms": [
      {
        "infos": [
          {
            "subtlety": 0.9999930932086258,
            "Malignancy": 2,
            "Texture": 2,
            "sopInstanceUid": "1.3.6.1.4.1.14519.5.2.1.6279.6001.824843590991776411530080688091",
            "probablity": 0.9999930932086258,
            "coordinate": {"y1": 357, "y2": 357, "x2": 326, "x3": 326, "y3": 377, "x1": 306, "y4": 377, "x4": 306},
            "Calcification": 1
          }
        ],
        "sopInstanceUid": "1.3.6.1.4.1.14519.5.2.1.6279.6001.824843590991776411530080688091"
      }
    ],
    "id": "88888"
  }
}

How can I specify a complex inference result format?

What is the data type of the output tensor of your model? That looks like JSON, so is it output as a STRING?
BTW, you will likely find more of a community of inference server users on the GitHub issues site: https://github.com/NVIDIA/tensorrt-inference-server/issues
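
For what it's worth, this is roughly how a raw output tensor would be fetched with the 0.11.0 beta Python client; the model, input, and output names below are placeholders rather than anything from your setup:

```python
import numpy as np
from tensorrtserver.api import InferContext, ProtocolType

# Placeholder model / tensor names -- substitute the ones from your config.pbtxt.
MODEL_NAME = "lung_detector"
INPUT_NAME = "image"
OUTPUT_NAME = "detections"

# model_version=None asks the server for the latest version of the model.
ctx = InferContext("localhost:8000", ProtocolType.HTTP, MODEL_NAME,
                   model_version=None, verbose=False)

image = np.zeros((512, 512, 1), dtype=np.float32)  # stand-in for a preprocessed CT slice

# ResultFormat.RAW returns the output tensor as a numpy array;
# ResultFormat.CLASS would return (index, value, label) tuples instead.
result = ctx.run({INPUT_NAME: [image]},
                 {OUTPUT_NAME: InferContext.ResultFormat.RAW},
                 batch_size=1)

raw_output = result[OUTPUT_NAME][0]  # numpy array for the first (and only) batch item
```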

Thanks for your reply.

In fact, JSON is the response format of our HTTP deep learning server.
Our deep learning model is a Faster R-CNN-like object detection model; we post-process the output of the model, then pack the result into JSON and send it to the client.

Previously I was wondering how the TRT Inference Server could return a JSON result; maybe I was approaching the TRT Inference Server in the wrong way.

I'm considering writing a server program that sends requests to the TRT Inference Server and does the post-processing, deploying that server program and the TRT Inference Server on the same host (our data is CT scans, about 100 MB per scan), and having the client send requests to the server program instead of to the TRT Inference Server directly.

What you describe in the last paragraph sounds like the right approach. Your service will need to perform the pre- and post-processing and then communicate with TRTIS to perform the actual inference.
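
A minimal sketch of such a wrapper service, using Flask purely as an illustration; the route, tensor shapes, and helper functions are all placeholders, and the TRTIS call itself is stubbed out:

```python
# Illustrative wrapper service: receives a CT scan, runs pre/post-processing
# locally, and delegates only the model execution to TRTIS.
import time

import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)


def preprocess(scan_bytes):
    """Decode the uploaded CT scan into the tensor the model expects (placeholder)."""
    return np.zeros((512, 512, 1), dtype=np.float32)


def run_trtis_inference(image):
    """Call TRTIS here, e.g. via tensorrtserver.api.InferContext.run() with
    ResultFormat.RAW, and return the raw detection tensor (placeholder)."""
    return np.empty((0, 6), dtype=np.float32)


def postprocess(raw_detections):
    """Turn raw boxes/scores into the nested result dict shown above (placeholder)."""
    return {"dicoms": [], "id": "88888"}


@app.route("/lungDAC/", methods=["POST"])
def infer():
    image = preprocess(request.data)      # pre-processing happens in this service
    raw = run_trtis_inference(image)      # only the model itself runs on TRTIS
    result = postprocess(raw)             # pack into the JSON the client expects
    return jsonify({
        "response": {"status": 200,
                     "service": request.url,
                     "time": int(time.time() * 1000)},
        "result": result,
    })


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5509)
```

The client then talks only to this service, and the roughly 100 MB CT scan never has to be serialized into an inference request beyond the single hop on the local host.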