Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only): 440.33.01
• Issue Type (questions, new requirements, bugs): bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
Hi, I’m using the Python API. When I run the program inside DeepStream’s Triton Inference Server container, it works fine. But when I run it in DeepStream’s container while using a separate Triton Inference Server through gRPC, I get this error. The Triton Inference Server version is 21.08, the same as in DeepStream’s Triton Inference Server container.
The error output is:
INFO: infer_grpc_backend.cpp:164 TritonGrpcBackend id:1 initialized for model: yolov5onnx
Segmentation fault (core dumped)
This is the Triton model folder:
triton_model.zip (54.1 MB)
This is the code:
deepstream_yolov5.py (11.3 KB)
This is the config file:
dsyolov5_nopostprocess_grpc.txt (663 Bytes)
This is the label file:
labels.txt (624 Bytes)
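For context, a gRPC-mode nvinferserver config points the `triton` backend at a remote server’s gRPC endpoint instead of a local model repository. Below is a minimal sketch of what such a section typically looks like (the model name matches the log above; the IP and batch size are placeholders, and Triton’s default gRPC port 8001 is assumed, so adapt to the actual attached config):

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "yolov5onnx"
      version: -1
      grpc {
        # Placeholder: replace with the Triton server machine's address
        url: "<triton-server-ip>:8001"
      }
    }
  }
}
```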
Sorry for the late response. Have you managed to get the issue resolved, or do you still need support? Thanks
I still need support. The problem still occurs.
I don’t know what’s wrong or how to avoid it next time.
DeepStream’s Triton Inference Server container is “nvcr.io/nvidia/deepstream:6.0-triton”, right?
What do you mean by “DeepStream’s container using Triton Inference Server”?
Yes, DeepStream’s Triton Inference Server container is “nvcr.io/nvidia/deepstream:6.0-triton”.
Can you check both questions?
nvcr.io/nvidia/deepstream:6.0-triton already supports gRPC. Can you just use nvcr.io/nvidia/deepstream:6.0-triton instead of the Triton server docker?
What I need is one machine with DeepStream installed, used for video decoding, and another machine with Triton installed, used for inference.
In addition to supporting native inference, DeepStream applications can communicate with independent/remote instances of Triton Inference Server using gRPC, allowing the implementation of distributed inference solutions.
Here, machine A has DeepStream installed and is used for video decoding; machine B has Triton installed and is used for inference. Machine A communicates with machine B over gRPC.
How do I use nvcr.io/nvidia/deepstream:6.0-triton instead of the Triton server docker on machine B?
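One possible approach (a sketch, not verified against 6.0 specifically) is to run the same container on machine B but launch the `tritonserver` binary directly, serving the model repository over the network. The host path and ports below are placeholders; it assumes `tritonserver` is on the container’s PATH and that Triton’s default ports (8000 HTTP, 8001 gRPC, 8002 metrics) are used:

```shell
# On machine B: serve the model repository with standalone Triton
# from the DeepStream Triton container.
docker run --gpus all --rm -d \
    -p 8000:8000 -p 8001:8001 -p 8002:8002 \
    -v /path/to/triton_model:/models \
    nvcr.io/nvidia/deepstream:6.0-triton \
    tritonserver --model-repository=/models
```

Machine A’s nvinferserver config would then point its gRPC `url` at machine B’s address on port 8001.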
README (6.9 KB)
I think you should read it carefully.