tensorrtserver.api.InferenceServerException - Liver Segmentation Pipeline

Hi,

I tried the "Liver Segmentation Pipeline"
( https://docs.nvidia.com/clara/deploy/sdk/Applications/Pipelines/LiverTumorPipeline/public/docs/README.html ),
but the Argo workflow log stops at tensorrtserver.api.InferenceServerException.

Could anyone guide me on what I should do?

P.S.
Error details:

ERROR:medical.tlt2.src.components.inferers.trtis_utils.InferenceEngine:Failed to get server status
Traceback (most recent call last):
  File "components/inferers/trtis_utils.py", line 69, in __init__
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.5/site-packages/tensorrtserver/api/__init__.py", line 471, in get_server_status
    self._ctx, byref(cstatus), byref(cstatus_len))))
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.5/site-packages/tensorrtserver/api/__init__.py", line 182, in _raise_if_error
    raise ex
tensorrtserver.api.InferenceServerException: [ 0] HTTP client failed: Couldn't connect to server
Traceback (most recent call last):
  File "app_base_inference/main.py", line 61, in <module>
    trtis_uri=trtis_uri)
  File "/app/app_base_inference/app.py", line 85, in __init__
    self.setup()
  File "/app/app_base_inference/app.py", line 166, in setup
    self.inferer = self.build_component(inferer_config)
  File "/app/app_base_inference/app.py", line 387, in build_component
    return WorkFlowFactory.build_component(config_dict)
  File "workflows/workflow_factory.py", line 174, in build_component
  File "utils/class_utils.py", line 33, in instantiate_class
  File "components/inferers/trtis_sw_inferer.py", line 32, in __init__
  File "components/inferers/trtis_inferer.py", line 9, in __init__
  File "components/inferers/trtis_utils.py", line 69, in __init__
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.5/site-packages/tensorrtserver/api/__init__.py", line 471, in get_server_status
    self._ctx, byref(cstatus), byref(cstatus_len))))
  File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.5/site-packages/tensorrtserver/api/__init__.py", line 182, in _raise_if_error
    raise ex
tensorrtserver.api.InferenceServerException: [ 0] HTTP client failed: Couldn't connect to server
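The exception means the client never reached the TensorRT Inference Server over HTTP. As a first diagnostic, you can probe the server's status endpoint directly with only the Python standard library. This is a minimal sketch, assuming the server's default HTTP port 8000 and the v1 `/api/status` endpoint; adjust host and port to your deployment:

```python
# Sketch: probe the TensorRT Inference Server status endpoint.
# Host/port are assumptions (default TRTIS HTTP port is 8000).
import http.client

def check_trtis(host="localhost", port=8000, timeout=5):
    """Return True if the server's /api/status endpoint responds with 200."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", "/api/status")
        resp = conn.getresponse()
        conn.close()
        return resp.status == 200
    except OSError:
        # Connection refused, unreachable host, or timeout.
        return False

print(check_trtis())
```

If this returns False from inside the pipeline container, the problem is reachability (server not running, wrong URI, or a proxy in the way) rather than the model itself.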

Hi yun_yuniv,
Sorry for the late reply.

It looks like a network-related problem (possibly proxy issues).
Could you take a look at the comments in the following links?

https://devtalk.nvidia.com/default/topic/1069612/clara-deploy-sdk-new-/pipeline-cannot-connect-to-tensorrt-server/post/5420268/#5420268
https://devtalk.nvidia.com/default/topic/1050737/clara-deploy-sdk-new-/issue-pulling-clara/post/5420274/#5420274
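Since a misconfigured proxy can intercept even localhost traffic to the inference server, it can help to inspect the proxy environment inside the container and exempt the server's host. The sketch below uses only the standard library; the host names are illustrative placeholders, not confirmed Clara settings:

```python
# Sketch: inspect proxy environment variables and exempt a host
# from proxying. Host names here are illustrative only.
import os

def proxy_vars():
    """Return any proxy-related environment variables that are set."""
    names = ("http_proxy", "https_proxy", "no_proxy",
             "HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY")
    return {n: os.environ[n] for n in names if n in os.environ}

def exempt_host(host):
    """Append host to no_proxy so HTTP clients bypass the proxy for it."""
    entries = [e for e in os.environ.get("no_proxy", "").split(",") if e]
    if host not in entries:
        entries.append(host)
    os.environ["no_proxy"] = ",".join(entries)
    return os.environ["no_proxy"]

print(proxy_vars())
print(exempt_host("localhost"))
```

If `proxy_vars()` shows a proxy configured, adding the inference server's host to `no_proxy` (or unsetting the proxy variables in the container) is a common fix for this class of connection failure.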

Also, you can download and install the latest version (0.4.1, released 12/10/2019) from the following links:

https://ngc.nvidia.com/catalog/model-scripts/nvidia:clara_deploy_sdk/setup
https://docs.nvidia.com/clara/deploy/ClaraInstallation.html