Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: DS 6.2
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only): 525.85.12
• Issue Type (questions, new requirements, bugs): bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. for which plugin or which sample application, and the function description.)
Hello,
I am running triton-server on localhost, and the ai_service container has host network access. I am getting the following error:
[libprotobuf ERROR /tmp/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:321] Error parsing text-format nvdsinferserver.config.InferenceConfig: 17:33: Message type "nvdsinferserver.config.TritonGrpcParams" has no field named "enable_cuda_buffer_sharing".
I want to enable CUDA buffer sharing; can you please suggest how to use it?
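For reference, here is the relevant part of my nvinferserver config. This is a minimal sketch; the model name and URL are placeholders rather than my exact values:

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "my_model"   # placeholder
      version: -1
      grpc {
        url: "localhost:8001"
        enable_cuda_buffer_sharing: true   # this is the field the parser rejects
      }
    }
  }
}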
I have downgraded to the 22.09 NGC container for dGPU on x86, as recommended in the documentation, but I am still seeing the same issue:
[libprotobuf ERROR /tmp/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:321] Error parsing text-format nvdsinferserver.config.InferenceConfig: 17:33: Message type "nvdsinferserver.config.TritonGrpcParams" has no field named "enable_cuda_buffer_sharing".
[generic_gstreamer.py:99:run_pipeline:20230615T12:53:00:INFO] Starting pipeline
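For completeness, this is roughly how I start the Triton server on the host, with host networking as mentioned above; the model repository path is a placeholder:

docker run --gpus all --rm --net=host \
    -v /path/to/models:/models \
    nvcr.io/nvidia/tritonserver:22.09-py3 \
    tritonserver --model-repository=/models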