Deepstream app with inference server

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
NVIDIA GeForce GTX 1080 Ti
• DeepStream Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)

I tried to use the ensemble backend in DeepStream; the pipeline is torchscript → onnx → torchscript. The following error occurs:
deepstream-app: infer_trtis_server.cpp:419: NvDsInferStatus nvdsinferserver::TrtServerResponse::parseOutputData(const nvdsinferserver::TrtServerRequest*): Assertion `bufDesc.memType == InferMemType::kCpu || bufDesc.memType == InferMemType::kCpuCuda' failed.
But when I use torchscript → onnx → onnx, it works. Is there a bug in infer_trtis_server.cpp?
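For reference, a Triton ensemble for a pipeline like this would be declared in a `config.pbtxt` roughly as sketched below. This is only an illustrative assumption of the setup described above — model names, tensor names, data types, and dimensions are hypothetical placeholders, not the actual configuration:

```
# Hypothetical ensemble config.pbtxt (names and dims are placeholders)
name: "ts_onnx_ts_ensemble"
platform: "ensemble"
max_batch_size: 1
input [
  { name: "INPUT" data_type: TYPE_FP32 dims: [ 3, 224, 224 ] }
]
output [
  { name: "OUTPUT" data_type: TYPE_FP32 dims: [ 1000 ] }
]
ensemble_scheduling {
  step [
    {
      # Stage 1: TorchScript model (PyTorch backend uses NAME__N tensor names)
      model_name: "stage1_torchscript"
      model_version: -1
      input_map  { key: "INPUT__0"  value: "INPUT" }
      output_map { key: "OUTPUT__0" value: "stage1_out" }
    },
    {
      # Stage 2: ONNX model
      model_name: "stage2_onnx"
      model_version: -1
      input_map  { key: "input"  value: "stage1_out" }
      output_map { key: "output" value: "stage2_out" }
    },
    {
      # Stage 3: TorchScript model producing the final ensemble output
      model_name: "stage3_torchscript"
      model_version: -1
      input_map  { key: "INPUT__0"  value: "stage2_out" }
      output_map { key: "OUTPUT__0" value: "OUTPUT" }
    }
  ]
}
```

Note that the failed assertion checks the memory type of the response buffer (it expects CPU or pinned CPU/CUDA memory), so sharing the exact per-model configs, in particular any instance-group or memory-related settings of the final TorchScript stage, would likely help narrow this down.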

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.

Sorry! Could you share how to reproduce this?