Triton server logs

Where can I find the Triton server logs?
I'm using the Docker image triton-server-20.02.

In the thread below I see a user shared the Triton server logs. Could you let us know the location of the Triton log file?

Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)


• Hardware Platform (Jetson / GPU)
• DeepStream Version 6.1
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.01
ubuntu@ip-172-31-11-102:~$ nvidia-smi
Fri Apr 1 04:03:32 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:1E.0 Off |                    0 |
| N/A   28C    P0    25W /  70W |  13531MiB / 15360MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      6150      C   tritonserver                    13445MiB |
+-----------------------------------------------------------------------------+

Please refer to Triton infererence server example ‘simple_grpc_infer_client.py’ - #15 by h9945394143 - DeepStream SDK - NVIDIA Developer Forums

Start triton docker container with appropriate flags to get logs output

Can you let me know the appropriate flags to be used?

--log-verbose=3 --log-info=1 --log-warning=1 --log-error=1

docker run --gpus all --rm --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 -p8000:8000 -p8001:8001 -p8002:8002 -v this_repo_path:/models nvcr.io/nvidia/tritonserver:21.10-py3 tritonserver --model-store=/models --log-verbose=3 --log-info=1 --log-warning=1 --log-error=1
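For reference, tritonserver writes its log messages to stdout/stderr rather than to a file, so when it runs inside a container the logs are captured by Docker itself. A minimal sketch of retrieving them (the container name `triton` and the host path `/path/to/model_repo` are placeholders of my own, not from the thread):

```shell
# Run the server detached, with a name so its logs are easy to reference.
docker run -d --name triton --gpus all --shm-size=1g \
  -p8000:8000 -p8001:8001 -p8002:8002 \
  -v /path/to/model_repo:/models \
  nvcr.io/nvidia/tritonserver:21.10-py3 \
  tritonserver --model-store=/models --log-verbose=3

# The verbose log lines go to the container's stdout/stderr;
# view them, or redirect them into a file on the host:
docker logs triton > triton.log 2>&1
```

`docker logs -f triton` can be used instead to follow the output live while the server is running.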

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.