Triton server configuration instance group

Can we configure one GPU instance and two CPU instances for a particular model as below?

instance_group [
  {
    count: 1
    gpus: 0
    kind: KIND_GPU
  },
  {
    count: 2
    kind: KIND_CPU
  }
]

With this configuration I am getting the following error:

E0322 06:45:55.883454 73 model_repository_manager.cc:1215] failed to load 'Vehicle_model' version 1: Invalid argument: instance group Vehicle_model_0 of model Vehicle_model must be KIND_GPU and must specify at least one GPU id

Setup information is as below:

• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.0
• TensorRT Version: 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only): 495.29.05
• Issue Type( questions, new requirements, bugs): questions, bugs

• Hardware Platform (Jetson / GPU): Tesla T4
root@6b55a2214e5a:/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream-apps/client/src/python/examples# nvidia-smi
Tue Mar 22 13:22:01 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   41C    P0    26W /  70W |  13495MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      6150      C                                   13445MiB |
+-----------------------------------------------------------------------------+

DS-Triton does support Triton's GPU+CPU multi-instance groups running together on the same model. However, not all Triton models/backends can run in both GPU mode and CPU mode. For example, TensorRT models do not support CPU mode, some TensorFlow models are frozen for GPU execution only, and some backends only support CPU data processing. You need to check whether your specific model/backend has CPU support. See Triton's multi-instance documentation: server/model_configuration.md at main · triton-inference-server/server · GitHub
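As a sketch, for a backend that does support both devices, a mixed GPU+CPU instance group in `config.pbtxt` could look like the following. Note that the error message asks for at least one GPU ID on the KIND_GPU group; `gpus` takes a list of device IDs, and the counts here are only illustrative:

```
instance_group [
  {
    # One instance on GPU device 0
    count: 1
    kind: KIND_GPU
    gpus: [ 0 ]
  },
  {
    # Two instances on CPU
    count: 2
    kind: KIND_CPU
  }
]
```

If the backend cannot run on CPU (e.g. TensorRT), Triton will still fail to load the model with a KIND_CPU group, so the CPU entry must be removed for such models.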
