TensorRT Inference Server crash with multiple clients

Hi,

I am testing the TensorRT Inference Server with multiple clients (5).
When I use more than 4 clients, the inference server sometimes crashes…

E0307 11:43:41.125616259      39 sync_posix.cc:47]           assertion failed: pthread_mutex_lock(mu) == 0

Has anyone run into the same error?
Thank you!

Hello,

This seems like a resource/semaphore race condition. To help us debug, can you share a small repro that contains the model and clients that demonstrate the error you are seeing?

Hi,

Yes, I can. Where can I upload the model and client (Python file)?
I am using the docker image tensorrtserver:19.01-py3.
I built the InferenceClient from the 19.01 branch.
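
For reference, this is roughly how I fetched the client code. I am assuming here that the branch is named r19.01 in the tensorrt-inference-server GitHub repo; adjust if yours differs:

git clone -b r19.01 https://github.com/NVIDIA/tensorrt-inference-server.git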

Via Dropbox or Google Drive, or https://devtalk.nvidia.com/default/topic/1043356/tensorrt/attaching-files-to-forum-topics-posts/

I am sending the model and Python code.
Steps:
Start server.

Start 5 new docker containers with clienttensorrt (InferenceClient built from the 19.01 branch).

In each docker container, run:

for _ in {1..100}; do python grpc_image_client.py -u "YOUR_SERVER_IP":8001 -m test_netdef ./"PATH_TO_TEST_IMAGE"/check.png; done
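
If it helps, the same load pattern can also be generated from a single shell by backgrounding the loops. This is just a sketch; "YOUR_SERVER_IP" and "PATH_TO_TEST_IMAGE" are placeholders as above:

for i in {1..5}; do
  # each backgrounded subshell plays the role of one client container
  (for _ in {1..100}; do
    python grpc_image_client.py -u "YOUR_SERVER_IP":8001 -m test_netdef ./"PATH_TO_TEST_IMAGE"/check.png
  done) &
done
wait   # block until all 5 client loops have finished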

model_code.zip (2.3 MB)

Hello,

Can you try with the 19.03 container? It is our first GA (non-beta) TRTIS release.
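
Something like the following should work to pull and start it. This is a sketch assuming the standard server invocation from the TRTIS documentation; /path/to/models is a placeholder for your model repository, and the ports match the gRPC endpoint (8001) your client already uses:

docker pull nvcr.io/nvidia/tensorrtserver:19.03-py3
# expose HTTP (8000), gRPC (8001), and metrics (8002) endpoints
nvidia-docker run --rm -p8000:8000 -p8001:8001 -p8002:8002 \
  -v /path/to/models:/models nvcr.io/nvidia/tensorrtserver:19.03-py3 \
  trtserver --model-store=/models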