"nvdsinferserver.config.TritonGrpcParams" has no field named "enable_cuda_buffer_sharing"

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: DS 6.2
• JetPack Version (valid for Jetson only):
• TensorRT Version:
• NVIDIA GPU Driver Version (valid for GPU only): 525.85.12
• Issue Type (questions, new requirements, bugs): bugs
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line, and other details needed to reproduce.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or sample application, and the function description.)

Hello,

I am running triton-server on localhost, and the ai_service container has host network access. I am getting the error below:

[libprotobuf ERROR /tmp/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:321] Error parsing text-format nvdsinferserver.config.InferenceConfig: 17:33: Message type "nvdsinferserver.config.TritonGrpcParams" has no field named "enable_cuda_buffer_sharing".

I want to enable CUDA buffer sharing. Can you please suggest how to use it?

infer_config {
  unique_id: 6
  gpu_ids: 0
  max_batch_size: 32

  backend {
    inputs [
      {
        name: "INPUT"
        dims: [3, 224, 384]
        data_type: TENSOR_DT_FP16
      }
    ]
    triton {
      model_name: "ensemble_centerface"
      version: -1
      grpc {
        url: "localhost:8001"
        enable_cuda_buffer_sharing: true
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 1
    frame_scaling_filter: 3
    normalize {
      scale_factor: 1
    }
  }

  postprocess {
    other {}
  }

  extra {
    custom_process_funcion: "CreateInferServerCustomProcessCenterfaceParser"
  }

  custom_lib {
    path: "/usr/local/lib/libparsing_library.so"
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
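Before debugging the config field itself, it can help to confirm the Triton endpoints are actually reachable from inside the ai_service container. A minimal stdlib-only sketch (an assumption, not part of the original setup: it probes Triton's default KServe-v2 HTTP port 8000, which runs alongside the gRPC port 8001 used in the config above; the helper names are hypothetical):

```python
from urllib.parse import urlunsplit
from urllib.request import urlopen
from urllib.error import URLError


def triton_http_url(host: str, port: int, path: str) -> str:
    """Build a Triton KServe-v2 HTTP endpoint URL (HTTP port, not the gRPC one)."""
    return urlunsplit(("http", f"{host}:{port}", path, "", ""))


def triton_is_ready(host: str = "localhost", port: int = 8000, timeout: float = 2.0) -> bool:
    """Return True if Triton answers HTTP 200 on /v2/health/ready, else False."""
    try:
        with urlopen(triton_http_url(host, port, "/v2/health/ready"), timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False


print(triton_http_url("localhost", 8000, "/v2/health/ready"))
# -> http://localhost:8000/v2/health/ready
```

If this returns False from inside the container, the problem is network reachability (for example, `localhost` not resolving to the Triton container) rather than the config field.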

How did you enable the Triton server? With Docker, or did you install the Triton server yourself?

I am using triton docker image (nvcr.io/nvidia/tritonserver:23.03-py3)

Please follow the compatibility requirements of the DeepStream 6.2 nvinferserver plugin: Gst-nvinferserver — DeepStream 6.2 Release documentation

Hello @Fiona.Chen .

I have downgraded to the NGC Container 22.09 for dGPU on x86, as the documentation recommends, but I am still seeing the same issue.

[libprotobuf ERROR /tmp/grpc/third_party/protobuf/src/google/protobuf/text_format.cc:321] Error parsing text-format nvdsinferserver.config.InferenceConfig: 17:33: Message type "nvdsinferserver.config.TritonGrpcParams" has no field named "enable_cuda_buffer_sharing".
[generic_gstreamer.py:99:run_pipeline:20230615T12:53:00:INFO] Starting pipeline

I've tried running triton-server in nvcr.io/nvidia/tritonserver:22.09-py3 and the DeepStream app in nvcr.io/nvidia/deepstream:6.2-triton, and no issue is found.
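For reference, a setup like the one described above can be started with plain docker run along these lines (a sketch, not the exact commands used in the test: the model-repository path `/opt/models` and the use of host networking are assumptions to adjust to your environment):

```shell
# Triton server (assumed model repo at /opt/models, mounted as /models):
docker run --gpus all --rm --net=host \
  -v /opt/models:/models \
  nvcr.io/nvidia/tritonserver:22.09-py3 \
  tritonserver --model-repository=/models

# DeepStream app container on the same host network, so that the
# "localhost:8001" gRPC url in the nvinferserver config reaches Triton:
docker run --gpus all --rm -it --net=host \
  nvcr.io/nvidia/deepstream:6.2-triton
```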

Can you please share the docker run command so I can try it? I am using docker-compose to start the containers.

ai-service:
  image: ai_service
  container_name: ai_service
  hostname: ai_service
  depends_on:  # Start the dependencies first
    - couchdb
    - rabbitmq3
    - restreamer
  environment:
    - COUCHDB_DB_HOST=172.28.0.4
    - COUCHDB_DB_PORT=5984
    - COUCHDB_DB_USER=admin
    - COUCHDB_DB_PASSWORD=admin986532
    - REST_API_SERVER_HOST=http://172.28.0.6:8080
    - MESSAGE_BROKER_HOST=172.28.0.3
    - MESSAGE_BROKER_PORT=5672
    - MESSAGE_BROKER_USERNAME=myuser
    - MESSAGE_BROKER_PASSWORD=mypassword
    - ARTIFACTORY_HOST=host.docker.internal
    - TYCO_AI_SERVER_IP=172.28.0.14
    - MILVUS_HOST=20.166.69.85
  volumes:
    - /home/einfochips/tycoai/storage:/opt/tycoai/storage
    - /home/azureuser/tycoai-nextgen/ai/acvs-tycoai-ai-service:/app/
  ports:
    - 8558:8558
    - 8090:8090
  extra_hosts:
    - "host.docker.internal:host-gateway"
  networks:
    tycoai-internal:
      ipv4_address: 172.28.0.14
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: 1
            capabilities: [gpu]

triton-server:
  image: tycoai_triton_server:gpu-full
  container_name: triton-server
  hostname: triton-server
  shm_size: 6g
  volumes:
    - /etc/localtime:/etc/localtime:ro
    - /opt/models:/model_repo
  ports:
    - 8002:8002
  extra_hosts:
    - "host.docker.internal:host-gateway"
  networks:
    tycoai-internal:
      ipv4_address: 172.28.0.16
  environment:
    - DEPLOY_ENV=DEV
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: 1
            capabilities: [gpu]