Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) : NVIDIA GeForce RTX 3090
• DeepStream Version : 6.3
• JetPack Version (valid for Jetson only) : N/A (x86 dGPU setup)
• TensorRT Version : 12.2
• NVIDIA GPU Driver Version (valid for GPU only) : 535.104.05
• Issue Type (questions, new requirements, bugs) : Question
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
Regarding deepstream-test.py: this pipeline example runs PeopleNet object detection with DeepStream and Triton, and it works successfully. However, when I tried to check whether the Triton server is running using the commands below, I got the following results.
Command1: ps aux | grep tritonserver
Output
root 1284 0.0 0.0 3304 720 pts/3 S+ 12:43 0:00 grep --color=auto tritonserver
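Note that the only match is the grep process itself, so no standalone tritonserver process appears to be running. A small variant that avoids grep matching its own command line (the bracket trick is a common shell idiom):

ps aux | grep '[t]ritonserver'

With that form, an empty result unambiguously means no tritonserver process exists.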
Command2: curl -Is http://localhost:8000/v2/health/live
Output
Empty
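For comparison, my understanding is that a live Triton server answers this endpoint with HTTP 200, so I would expect output roughly like:

HTTP/1.1 200 OK

An empty response here suggests nothing is listening on port 8000 at all.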
Command3: netstat -tuln | grep 8000
Output
Empty
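With a server running on Triton's default ports (8000 HTTP, 8001 gRPC, 8002 metrics), I would expect netstat to show something like:

tcp        0      0 0.0.0.0:8000            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:8001            0.0.0.0:*               LISTEN
tcp        0      0 0.0.0.0:8002            0.0.0.0:*               LISTEN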
Command4: tritonserver
Output
I1007 12:43:40.197500 1289 metrics.cc:747] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3090
I1007 12:43:40.198636 1289 metrics.cc:640] Collecting CPU metrics
I1007 12:43:40.198792 1289 tritonserver.cc:2364]
+----------------------------------+------------------------------------------------------------------------------+
| Option                           | Value                                                                        |
+----------------------------------+------------------------------------------------------------------------------+
| server_id                        | triton                                                                       |
| server_version                   | 2.32.0                                                                       |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) |
|                                  | schedule_policy model_configuration system_shared_memory cuda_shared_memory  |
|                                  | binary_tensor_data parameters statistics trace logging                       |
| model_control_mode               | MODE_NONE                                                                    |
| strict_model_config              | 0                                                                            |
| rate_limit                       | OFF                                                                          |
| pinned_memory_pool_byte_size     | 268435456                                                                    |
| min_supported_compute_capability | 6.0                                                                          |
| strict_readiness                 | 1                                                                            |
| exit_timeout                     | 30                                                                           |
| cache_enabled                    | 0                                                                            |
+----------------------------------+------------------------------------------------------------------------------+
I1007 12:43:40.199734 1289 server.cc:281] No server context available. Exiting immediately.
error: creating server: Invalid argument - --model-repository must be specified
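From the error, the server refuses to start without a model repository. A minimal launch sketch, where /path/to/model_repository is a placeholder for the directory holding the PeopleNet model folder and its config.pbtxt:

tritonserver --model-repository=/path/to/model_repository

The HTTP, gRPC, and metrics ports can also be set explicitly with --http-port, --grpc-port, and --metrics-port; 8000/8001/8002 are the defaults.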
Does the Triton server run on a different port?
I can see that the Triton server is not up as a separate process, so how can I make sure that DeepStream uses Triton as its inference backend?
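For reference, my understanding from the DeepStream documentation is that the Gst-nvinferserver plugin is what routes inference to Triton, and its config file selects between an embedded (C API) Triton instance and a remote gRPC server. A rough sketch of the backend block that I believe selects gRPC mode (the model name and URL are placeholders for my setup):

infer_config {
  backend {
    triton {
      model_name: "peoplenet"
      version: -1
      grpc {
        url: "localhost:8001"
      }
    }
  }
}

In the embedded mode, I believe the same block instead carries a model_repo { root: "/path/to/model_repository" } entry and Triton runs inside the DeepStream process itself, which would explain why no separate tritonserver process shows up in my checks. Is that correct?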
I'd appreciate your feedback.