Please provide complete information as applicable to your setup.
• Hardware Platform: GPU
• DeepStream Version: 6.1.1
• TensorRT Version: 8.4.1.5
• NVIDIA GPU Driver Version (valid for GPU only): 525.60.13
• Issue Type: questions, new requirements, bugs
Thanks for the response @fanzh. I did see that, but following the instructions in that readme I was unable to set up the PeopleNet model with Triton.
I0202 06:56:34.991804 135 grpc_server.cc:4587] Started GRPCInferenceService at 0.0.0.0:8001
I0202 06:56:34.992076 135 http_server.cc:3303] Started HTTPService at 0.0.0.0:8000
I0202 06:56:35.034268 135 http_server.cc:178] Started Metrics Service at 0.0.0.0:8002
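As a quick sanity check on the server side, the HTTP endpoint from the log above can be probed with Triton's standard readiness route (a sketch; it assumes the server is reachable on localhost and curl is installed, and prints a fallback message otherwise):

```shell
# Probe Triton's HTTP health endpoint (port 8000 from the log above).
if curl -sf --max-time 2 http://localhost:8000/v2/health/ready >/dev/null 2>&1; then
  echo "triton server: ready"
else
  echo "triton server: not reachable on localhost:8000"
fi
```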
From your logs, tritonserver started successfully and the peoplenet_tao model was loaded, so the server side is ready. You need to set model_name: "peoplenet_tao" in config_triton_grpc_infer_primary_peoplenet.txt, because the loaded model name is peoplenet_tao, not peoplenet. Then start the command according to the readme.
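For reference, after that change the relevant part of config_triton_grpc_infer_primary_peoplenet.txt would look roughly like this (a sketch of the nvinferserver gRPC backend section; the url value is an assumption for a server running on the same host):

```
infer_config {
  backend {
    triton {
      model_name: "peoplenet_tao"
      version: -1
      grpc {
        url: "localhost:8001"   # Triton gRPC endpoint from the server log
      }
    }
  }
}
```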
OK @fanzh, so I ran python3 deepstream_test_3.py -i file:///home/testA.mp4 --pgie nvinferserver-grpc -c config_triton_grpc_infer_primary_peoplenet.txt after making the change model_name: "peoplenet_tao"
Unable to create pgie : nvinferserver-grpc
Creating tiler
Creating nvvidconv
Creating nvosd
Creating EGLSink
Traceback (most recent call last):
File "deepstream_test_3.py", line 484, in <module>
sys.exit(main(stream_paths, pgie, config, disable_probe))
File "deepstream_test_3.py", line 317, in main
pgie.set_property('config-file-path', config)
AttributeError: 'NoneType' object has no attribute 'set_property'
As the logs show, the application failed to create the nvinferserver element. Are you using the deepstream triton docker? Does "gst-inspect-1.0 nvinferserver" output "No such element or plugin"?
Are you using the deepstream devel docker?
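A quick way to run that check and get a readable answer either way (works inside or outside the container):

```shell
# Check whether GStreamer can see the nvinferserver plugin.
# gst-inspect-1.0 exits non-zero (and prints "No such element or plugin")
# when the plugin is not installed.
if gst-inspect-1.0 nvinferserver >/dev/null 2>&1; then
  echo "nvinferserver: available"
else
  echo "nvinferserver: missing - run inside the deepstream triton docker"
fi
```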
DeepStream leverages the Triton libs to communicate with the Triton server. We suggest using the deepstream triton docker because then you don't need to compile the Triton libs yourself. Here is the image: nvcr.io/nvidia/deepstream:6.1.1-triton; please refer to DeepStream | NVIDIA NGC.
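A typical way to pull and enter that image (a sketch; the flags are assumptions that depend on your setup, e.g. add -e DISPLAY and -v /tmp/.X11-unix:/tmp/.X11-unix if you need the EGL sink to render):

```shell
docker pull nvcr.io/nvidia/deepstream:6.1.1-triton
docker run --gpus all -it --rm --net=host \
    nvcr.io/nvidia/deepstream:6.1.1-triton
```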
So the command python3 deepstream_test_3.py -i file:///home/testA.mp4 --pgie nvinferserver-grpc -c config_triton_grpc_infer_primary_peoplenet.txt, after making the change model_name: "peoplenet_tao", should also not need any container, right?
Yes, that way uses nvinfer to do inference; it does not use Triton, and it can run inside or outside docker.
This method uses nvinferserver to do inference, which uses Triton; both client and server need to run in the deepstream triton docker because both use the Triton libs.
OK, that makes sense. I will follow that, but one quick question @fanzh:
I can have two separate docker containers, right?
One container for the server, which is already running, and another for the client where I will run python3 deepstream_test_3.py -i file:///home/testA.mp4 --pgie nvinferserver-grpc -c config_triton_grpc_infer_primary_peoplenet.txt after making the change model_name: "peoplenet_tao".
It should be possible for the two to communicate with each other if I map the ports correctly between the host and the two containers.
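For example, the setup I have in mind is something like this (a sketch; the container names are placeholders, the server start command is elided, and the host address in the client config depends on the network mode):

```shell
# Server container: publish Triton's gRPC port (8001) to the host,
# then start tritonserver inside it as in the readme.
docker run --gpus all -d --name triton-server -p 8001:8001 \
    nvcr.io/nvidia/deepstream:6.1.1-triton

# Client container: run deepstream_test_3.py here, with the config's
#   grpc { url: "..." }
# pointing at the host's address and port 8001 instead of localhost.
docker run --gpus all -it --rm --name ds-client \
    nvcr.io/nvidia/deepstream:6.1.1-triton
```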