How can custom data be sent and received when using the Python backend in Triton?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
NVIDIA RTX A4500
• DeepStream Version
7.1
• JetPack Version (valid for Jetson only)
• TensorRT Version
default tensorrt in nvcr.io/nvidia/deepstream:7.1-triton-multiarch
• NVIDIA GPU Driver Version (valid for GPU only)
Driver Version: 535.161.08
CUDA Version: 12.2
• Issue Type( questions, new requirements, bugs)
questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Hello,
I am using the Python backend in Triton Server, and after inference my model returns custom JSON data. I now need to retrieve that JSON data in a pad probe. Is this approach feasible, or do I need to modify the nvinferserver source code to accept custom results? Alternatively, can I modify the source code of custom_lib to accept custom data?
Python backend code:

# Inside the Python backend's execute() method; output_dtype is the
# dtype of the "results" output declared in the model config.
data = {
    "video_has_board": "ok",
    "video_distance": self.video_distance,
    "video_angle_ok": self.video_angle_ok,
    "focal_length": self.focal_length,
    "normal_vector": self.normal_vector.tolist(),
    "D": self.D.tolist(),
}
result = json.dumps(data, ensure_ascii=False, separators=(",", ":"))
out = np.array(result)
out_tensor = pb_utils.Tensor("results", out.astype(output_dtype))
inference_response = pb_utils.InferenceResponse(output_tensors=[out_tensor])
responses.append(inference_response)
return responses
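To illustrate what the consumer side has to undo, here is a minimal round-trip sketch using only NumPy and the standard library. The helper names (`encode_results`, `decode_results`) are hypothetical, not Triton or DeepStream API; the encode half mirrors the backend code above, and the decode half shows the unwrapping a probe would need once it has the raw "results" tensor bytes.

```python
import json
import numpy as np

def encode_results(data: dict) -> np.ndarray:
    # Mirrors the Python-backend snippet above: serialize to compact JSON
    # and wrap in a 0-d NumPy object array (the shape Triton expects for
    # a TYPE_STRING output).
    result = json.dumps(data, ensure_ascii=False, separators=(",", ":"))
    return np.array(result, dtype=object)

def decode_results(out: np.ndarray) -> dict:
    # Reverse step for the consumer: unwrap the scalar, decode bytes if
    # the transport delivered bytes rather than str, and parse the JSON.
    raw = out.item()
    if isinstance(raw, bytes):
        raw = raw.decode("utf-8")
    return json.loads(raw)

sample = {"video_has_board": "ok", "video_distance": 1.25}
assert decode_results(encode_results(sample)) == sample
```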

How can the custom responses mentioned above be retrieved through the nvinferserver component or other methods?
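One common pattern, sketched here as an assumption about this setup rather than a confirmed answer, is to have Gst-nvinferserver attach the raw output tensors to the frame metadata, so a downstream pad probe can read the "results" tensor as NvDsInferTensorMeta (similar to what the deepstream-ssd-parser Python sample does for custom postprocessing):

```
# Excerpt of a Gst-nvinferserver pbtxt config; the field names follow the
# standard plugin schema, shown here only as a sketch.
output_control {
  output_tensor_meta: true
}
```

With this enabled, the probe would iterate the frame's user metadata for entries of type NVDSINFER_TENSOR_OUTPUT_META and parse the JSON string from the tensor buffer.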

Regarding the Python backend, please refer to this FAQ.

OK, thanks.

Sorry for the late reply. Is this still a DeepStream issue to support? Thanks!