Docker container runs alone but not in pipeline


I have a Docker container based on the nvidia/pytorch image. I can run it fine on its own and get the results; however, when I put it into a Clara pipeline, I get an error regarding shared memory. I could not find any way to set the shm size through Clara. Is there a way to solve this issue?


First, thank you for testing out Clara Deploy, and sorry for this late response.

In Clara Deploy R3.0, each operator in a pipeline is launched in its own pod, and shared memory is not available across pods due to a limitation of Kubernetes. So if two or more Docker containers normally share memory when launched through Docker Compose or scripts, shared memory will not work once they are used as Clara pipeline operators.
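As background, the usual Kubernetes workaround for a too-small `/dev/shm` is a memory-backed `emptyDir` volume mounted at `/dev/shm` in the pod spec. Note that this only enlarges shared memory for containers within a single pod, it does not make shared memory work across pods, and the pod/container names and image tag below are illustrative; Clara R3.0 pipeline definitions may not expose a way to apply this to operator pods. A minimal sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shm-example              # illustrative name
spec:
  containers:
  - name: pytorch-operator       # illustrative name
    image: nvcr.io/nvidia/pytorch:20.03-py3   # example tag
    volumeMounts:
    - mountPath: /dev/shm        # overrides the default 64 MB shm
      name: dshm
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory             # tmpfs-backed volume
      sizeLimit: 1Gi             # cap on shm usage
```

Outside of Kubernetes, the equivalent when running the container directly is `docker run --shm-size=1g ...`.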

This is a known limitation and is currently being addressed in Clara Deploy, so that shared memory will work across pods.

If this does not explain the failure, please provide additional information so that we can analyze it further.

Best regards,
Ming Q