Deepstream triton server config_infer.txt file

I'm using the docker image triton-server-20.02. How do I change the connection from gRPC to HTTP?

• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.01
ubuntu@ip-172-31-11-102:~$ nvidia-smi
Fri Apr 1 04:03:32 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   28C    P0    25W /  70W |  13531MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      6150      C   tritonserver                    13445MiB |
+-----------------------------------------------------------------------------+

Below is my gRPC connection config:


triton {
  model_name: "Primary_Detector"
  version: -1
  grpc {
    url: "localhost:8001"
  }
}

How do I change it to HTTP?

Hi @h9945394143 ,
As shown in the Triton client sample, the protocol can be specified in the Triton client - client/image_client.cc at main · triton-inference-server/client · GitHub
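For context, a sketch of what that looks like on the command line. This assumes Triton's default port layout (HTTP/REST on 8000, gRPC on 8001) and the stock `image_client` example; model name, paths, and image file are placeholders:

```shell
# Start Triton with its default endpoints (assumed model repository path):
#   HTTP/REST on :8000, gRPC on :8001
tritonserver --model-repository=/models --http-port=8000 --grpc-port=8001 &

# The image_client sample selects the protocol with -i (HTTP is the default):
./image_client -m Primary_Detector -u localhost:8000 -i HTTP image.jpg
./image_client -m Primary_Detector -u localhost:8001 -i gRPC image.jpg
```

Note this switches the protocol in a standalone Triton client, not in a DeepStream config file.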

I'm using the DeepStream application. How can I update the config_infer.txt file from gRPC to HTTP?

triton {
  model_name: "Primary_Detector"
  version: -1
  grpc {
    url: "localhost:8001"
  }
}
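For reference: to my knowledge, the Gst-nvinferserver Triton backend config supports two modes, remote gRPC (`grpc { ... }`, as above) and native C-API (`model_repo { ... }`); a plain HTTP client mode is not an option in this config file. A sketch of the native-mode alternative, with the repository path assumed to mirror the DeepStream samples:

triton {
  model_name: "Primary_Detector"
  version: -1
  model_repo {
    root: "/opt/nvidia/deepstream/deepstream/samples/triton_model_repo"
    strict_model_config: true
  }
}

In native mode, Triton runs in-process with DeepStream, so no network protocol (gRPC or HTTP) is involved at all.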

Are you using the nvinferserver plugin?

I was able to fix it. Thanks.


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.