Reference liver pipeline works for example data, memory issue when I try to use my own

I have installed Clara Deploy on Ubuntu and run it successfully on the example dataset. When I run it on my own DICOM series, even though the two look similar, the DICOM loader completes but the pipeline hangs at the segmentation stage, seemingly as the memory on the machine is used up, at which point the system becomes completely unresponsive. After restarting, the log for this job contains the following error:

container liver-tumor-segmentation is not valid for pod liver-seg-0391809-20160630-w55nc-458062943","reason":"BadRequest","code":400

The machine was recently upgraded to 32 GB of RAM. Any thoughts as to what the problem could be?

Thank you for your interest in Clara Deploy, and sorry for the trouble. Would you please shed some light on the hardware (CPU, GPU, etc.) you're using to run Clara Deploy, and also the version of Clara Deploy used?

Machine is:
OS: Ubuntu 18.04.4 LTS Linux-x86_64
CPU: Intel Core i7-4790
Memory: 31.3 GiB
GPU: TITAN RTX

Clara Deploy version 0.6.0-11245

Could it have something to do with the swap space being disabled, maybe?
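For reference, one way to check the swap and memory state on the machine is to read the standard Linux /proc files. A minimal stdlib-only sketch (the parsing helpers here are illustrative, not part of Clara):

```python
# Report swap status and available memory on Linux by reading the
# standard /proc/meminfo interface (values are reported in kB).

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of integer kB values."""
    values = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, rest = line.split(":", 1)
        parts = rest.split()
        if parts and parts[0].isdigit():
            values[key.strip()] = int(parts[0])
    return values

def swap_enabled(meminfo):
    """Swap is effectively disabled when SwapTotal is 0 kB."""
    return meminfo.get("SwapTotal", 0) > 0

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        info = parse_meminfo(f.read())
    print("MemTotal:     %d kB" % info["MemTotal"])
    print("MemAvailable: %d kB" % info.get("MemAvailable", 0))
    print("Swap enabled:", swap_enabled(info))
```

With swap disabled (as Kubernetes deployments typically require), `SwapTotal` reads as 0, and a process that exhausts physical memory is killed or stalls rather than spilling to disk.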

Thanks for the update; nothing jumps out, especially since you were able to run the pipeline with the packaged dataset just fine. Does the pipeline go through if you run other datasets? The best way to troubleshoot this would be to investigate the various Clara logs: the Platform API logs, for example, to rule out any issue with the platform, as well as the DICOM Adapter logs. It is also possible that there is an issue with the dataset itself, for example if there are any special characters in the patient name. I hope this provides some pointers and ideas for diagnosing the root cause.
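On the special-characters point: a quick way to spot problematic values is to flag any characters outside plain ASCII in the PatientName (and similar string tags). A minimal sketch of that check — the `SAFE_CHARS` whitelist is an illustrative assumption, not a Clara or DICOM-standard rule, and in practice you would read the tag value with a DICOM library such as pydicom:

```python
import string

# Characters commonly seen in DICOM PN (PatientName) values; anything
# outside this set is worth a closer look.  Illustrative whitelist only.
SAFE_CHARS = set(string.ascii_letters + string.digits + " ^-_.")

def suspicious_chars(value):
    """Return the characters in a tag value that fall outside SAFE_CHARS."""
    return sorted({ch for ch in value if ch not in SAFE_CHARS})

print(suspicious_chars("DOE^JOHN"))     # → []
print(suspicious_chars("MÜLLER^JÖRG"))  # → ['Ö', 'Ü']
```

Running this over the string tags of a series that fails, and comparing against the packaged dataset that works, would help confirm or rule out the dataset as the cause.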