[error] Segmentation fault (core dumped) when DeepStream’s container uses Triton Inference Server through gRPC

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only): 440.33.01
• Issue Type (questions, new requirements, bugs): bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

Hi, I’m using the Python API. When I run the program in DeepStream’s Triton Inference Server container, it works fine. But when I run it in DeepStream’s container using Triton Inference Server through gRPC, I get this error. The Triton Inference Server version is 21.08, the same as in DeepStream’s Triton Inference Server container.
The error output is:

Starting pipeline 

INFO: infer_grpc_backend.cpp:164 TritonGrpcBackend id:1 initialized for model: yolov5onnx
Segmentation fault (core dumped)
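
For context, the relevant part of the Python pipeline is simply an nvinferserver element pointed at the gRPC config file. A minimal sketch of that part (assuming the standard GStreamer Python bindings; the element and property names are the documented Gst-nvinferserver ones, the rest is illustrative and the full pipeline is in the attached script):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvinferserver is the DeepStream plugin that talks to Triton, either in-process
# (native C API) or to a remote server over gRPC, depending on its config file.
pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
if not pgie:
    raise RuntimeError("Unable to create nvinferserver element (is DeepStream installed?)")

# The config file below selects the gRPC backend; see the attachment further down.
pgie.set_property("config-file-path", "dsyolov5_nopostprocess_grpc.txt")
```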

This is the Triton model folder:
triton_model.zip (54.1 MB)
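
For reference, Triton expects that model folder to follow its standard repository layout. Based on the model name in the log above, it should look roughly like this (the file name inside the version directory is an assumption; the actual contents are in the attached zip):

```
triton_model/
└── yolov5onnx/          # model name, must match model_name in the nvinferserver config
    ├── config.pbtxt     # Triton model configuration
    └── 1/               # version directory
        └── model.onnx   # assumed name of the ONNX model file
```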

This is the code:
deepstream_yolov5.py (11.3 KB)

This is the config file:
dsyolov5_nopostprocess_grpc.txt (663 Bytes)

This is the label file:
labels.txt (624 Bytes)

Sorry for the late response. Have you managed to get the issue resolved, or do you still need support? Thanks

I still need support. It still shows the same problem.
I don’t know what’s wrong or how to avoid it next time.

Sorry!
DeepStream’s Triton Inference Server container is “nvcr.io/nvidia/deepstream:6.0-triton”, right?
What is “DeepStream’s container using Triton Inference Server”?

Yes, DeepStream’s Triton Inference Server container is “nvcr.io/nvidia/deepstream:6.0-triton”.

Can you check both questions?

“DeepStream’s container using Triton Inference Server” refers to Gst-nvinferserver — DeepStream 6.1.1 Release documentation.
The DeepStream container is nvcr.io/nvidia/deepstream:6.0-triton.
The Triton Inference Server is nvcr.io/nvidia/tritonserver:21.08-py3.

nvcr.io/nvidia/deepstream:6.0-triton already supports gRPC. Can you just use nvcr.io/nvidia/deepstream:6.0-triton instead of the Triton server docker?
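
If that works for you, the idea would be to start Triton from inside the DeepStream container on the serving machine rather than pulling the separate tritonserver image. A rough sketch, assuming the tritonserver binary is available in that image and using placeholder paths (the samples/configs/deepstream-app-triton-grpc/README in the container documents the supported way to do this):

```bash
# Serve the model repository over gRPC from the DeepStream Triton container.
# Paths are placeholders; 8001 is Triton's default gRPC port.
docker run --gpus all -it --rm \
    -p 8001:8001 \
    -v /path/to/triton_model:/models \
    nvcr.io/nvidia/deepstream:6.0-triton \
    tritonserver --model-repository=/models
```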

What I need is one machine with DeepStream installed, used for video decoding, and another machine with Triton installed, used for inference.

In addition to supporting native inference, DeepStream applications can communicate with independent/remote instances of Triton Inference Server using gRPC, allowing the implementation of distributed inference solutions.
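
On the DeepStream side this is selected in the nvinferserver config file: the backend section points at the remote Triton gRPC endpoint instead of loading the model in-process. A minimal sketch (the address is a placeholder for the remote machine; the model name is taken from the log above, and the actual values belong in the attached config file):

```
infer_config {
  unique_id: 1
  backend {
    triton {
      model_name: "yolov5onnx"
      version: -1
      grpc {
        url: "<triton-machine-IP>:8001"   # remote Triton gRPC endpoint
      }
    }
  }
}
```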

Here, machine A has DeepStream installed and is used for video decoding; machine B has Triton installed and is used for inference. Machine A communicates with machine B using gRPC.
How can I use nvcr.io/nvidia/deepstream:6.0-triton instead of the Triton server docker on machine B?

README (6.9 KB)
samples/configs/deepstream-app-triton-grpc/README
I think you should read it carefully.