Triton Inference Server example 'simple_grpc_infer_client.py'

I'm running the tritonserver:21.01-py3-sdk Docker container.

Could someone tell me the parameters to pass to run simple_grpc_infer_client.py?

Also, could you point me to a good sample use case in Python for running inference on video?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the content of the configuration files, the command line used, and other details needed to reproduce the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)

Where did you get the tritonserver:21.01-py3-sdk container? We only have nvcr.io/nvidia/tritonserver:21.02-py3-sdk.

Sorry, it's 21.02. I was able to find the sample examples. Could you help me with the path of the Triton server logs?
Whenever I change/unload/reload the models, where can I see the complete logs?

As you know, Triton uses a client-server architecture: the client sends commands to the server, and the server performs the inference.

1. The Triton SDK container does not include the inference server, so it does not have Triton server logs. Please refer to the Triton container introduction: Triton Inference Server | NVIDIA NGC

2. The client needs to send a message to the server if it needs information. You can call the API triton_client.get_inference_statistics to get model information; please refer to the demo simple_grpc_infer_client.py. Here is the full API introduction: GitHub - triton-inference-server/client: Triton Python, C++ and Java client libraries, and GRPC-generated client examples for go, java and scala.
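
For reference, here is a minimal sketch of such a call with the Python gRPC client. It is not the full simple_grpc_infer_client.py example; the server URL and model name are placeholders you would adjust for your own setup.

# Minimal sketch: query inference statistics from a running Triton server over gRPC.
# "localhost:8001" and "your_model_name" are placeholders; adjust them for your setup.
import tritonclient.grpc as grpcclient

triton_client = grpcclient.InferenceServerClient(url="localhost:8001", verbose=False)

# Omitting model_name returns statistics for all loaded models.
stats = triton_client.get_inference_statistics(model_name="your_model_name", as_json=True)
print(stats)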

I have a DeepStream 6.0 Triton docker, and I load the models and start the Triton server with

tritonserver --model-repository=model-folder-path

The models get loaded, and I can see the logs whenever there is a change in the models, as below:

I0322 05:41:04.478350 73 server.cc:586] 
+----------------------------------+---------+--------+
| Model                            | Version | Status |
+----------------------------------+---------+--------+
| Fire_model                       | 1       | READY  |
| Fire_onnx_model                  | 1       | READY  |
| Helmet_model                     | 1       | READY  |
| IndianVehicle_model              | 1       | READY  |
| PPEKit_ONNX                      | 1       | READY  |
| PPEKit_model                     | 1       | READY  |
| Primary_Detector                 | 1       | READY  |
| Secondary_CarColor               | 1       | READY  |
| Secondary_CarMake                | 1       | READY  |
| Secondary_VehicleTypes           | 1       | READY  |
| Segmentation_Industrial          | 1       | READY  |
| Segmentation_Semantic            | 1       | READY  |
| TripleRiding_model               | 1       | READY  |
| densenet_onnx                    | 1       | READY  |
| inception_graphdef               | 1       | READY  |
| mobilenet_v1                     | 1       | READY  |
| ssd_inception_v2_coco_2018_01_28 | 1       | READY  |
| ssd_mobilenet_v1_coco_2018_01_28 | 1       | READY  |
+----------------------------------+---------+--------+

I0322 05:41:04.478471 73 tritonserver.cc:1718] 
+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                                                |
+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                                               |
| server_version                   | 2.13.0                                                                                                                               |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memor |
|                                  | y cuda_shared_memory binary_tensor_data statistics                                                                                   |
| model_repository_path[0]         | /opt/nvidia/deepstream/deepstream-6.0/samples/triton_model_repo/                                                                     |
| model_control_mode               | MODE_POLL                                                                                                                            |
| strict_model_config              | 1                                                                                                                                    |
| pinned_memory_pool_byte_size     | 268435456                                                                                                                            |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                                                             |
| min_supported_compute_capability | 6.0                                                                                                                                  |
| strict_readiness                 | 1                                                                                                                                    |
| exit_timeout                     | 30                                                                                                                                   |
+----------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+

I0322 05:41:04.820839 73 grpc_server.cc:4111] Started GRPCInferenceService at 0.0.0.0:8001
I0322 05:41:04.832203 73 http_server.cc:2803] Started HTTPService at 0.0.0.0:8000
I0322 05:41:05.005487 73 http_server.cc:162] Started Metrics Service at 0.0.0.0:8002

If I close the terminal, it keeps running in the background. Now how do I see the logs without restarting tritonserver --model-repository=model-folder-path again?

1. Can you see the output above if you docker attach to the container again?
2. If you can't, what is your full docker start command?

When I try to re-attach, I don't have control over the tritonserver logs, but the models are up and running in the background.

I want to know whether I can see the logs when any new changes are made, such as 'model loaded successfully', 'failed to load model', or particular errors.

1. About "have control over the tritonserver logs": do you mean the logs do not update?
2. You can use the API triton_client.is_model_ready to get the model status; please refer to simple_grpc_model_control.py.

How do I view the logs? Is there a log file to view?

Also, can I get the complete command for 'triton_client.is_model_ready'?

1. No, Triton logs to the console. You can save the logs to a file by redirecting standard output and standard error.

2. Please refer to the links below:
protocol

API introduction

is_model_ready usage
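
For illustration, a minimal sketch of checking model readiness with the Python gRPC client. The server URL is assumed to be localhost:8001, and the model name below is just one taken from the model list above; adjust both for your setup.

# Minimal sketch: check whether a specific model is loaded and ready to serve.
# "localhost:8001" and "Primary_Detector" are example values; adjust them for your setup.
import tritonclient.grpc as grpcclient

triton_client = grpcclient.InferenceServerClient(url="localhost:8001", verbose=False)

if triton_client.is_model_ready(model_name="Primary_Detector", model_version="1"):
    print("model is ready")
else:
    print("model is not ready")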
