DICOM command for running a pipeline not working

Hi,
We have installed Clara Deploy on AWS and were trying to run the brain tumor segmentation model provided by Clara. We ran the storescp command in one tab of the instance, and in another tab of the same instance we ran the storescu command with my local IP address:

storescu -v +sd +r -xb -aet "DCM4CHEE" -aec "BrainSeg" 188.151.207.15 104 ./

But we get the following errors (TCP initialization error and connection timed out):

I: determining input files …
I: checking input files …
I: Requesting Association
F: Association Request Failed: 0006:031b Failed to establish association
F: 0006:0317 Peer aborted Association (or never connected)
F: 0006:031c TCP Initialization Error: Connection timed out


Could you please help us solve this issue.

Thanks
Kavitha

Thanks for the question Kavitha -

The instructions for deploying a pipeline are here, which it looks like you probably followed. Make sure the IP addresses in the ~/.clara/charts/dicom-adapter/files/dicom-server-config.yaml file are set properly: in that file they should be your local IP address (both sending and receiving). Then, in the storescu command you are executing on your local workstation, the IP address should be that of the AWS instance where Clara is running; in your command above, that would mean 188.151.207.15 is the AWS IP. This configuration ensures the DICOM service running within Clara Deploy on AWS expects a push from your local IP, and also sends results back there after processing.
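To make the direction concrete, here is a sketch of the push as it should look from the local workstation. It assumes the AWS instance's address is the 188.151.207.15 from the command above and that the AE titles are unchanged; the script only assembles and prints the command line so each piece is visible, it does not contact a server.

```shell
# Assemble the storescu invocation that runs on the LOCAL workstation.
# The IP must be the AWS host running Clara Deploy, not the local IP.
AWS_IP="188.151.207.15"    # Clara Deploy host (assumption: the AWS public IP)
CALLING_AET="DCM4CHEE"     # -aet: our AE title, as in the original command
CALLED_AET="BrainSeg"      # -aec: pipeline AE title from dicom-server-config.yaml
echo "storescu -v +sd +r -xb -aet $CALLING_AET -aec $CALLED_AET $AWS_IP 104 ./"
```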

Is this how you have it configured right now?

Hi,
The storescu command was using my local IP address. We changed it to the AWS IP address and the storescu command worked. But we are not able to see the dashboard: we used our local-IP-address:8000 as the URL and could not reach it. kubectl get pods showed the following:


There is a CrashLoopBackOff on the trtis-clara-pipesvc pod.
Can you please let us know what the problem is and how to fix it?

To be able to see the Argo dashboard, you should be looking at port 8000 on the system where Clara Deploy is running; in this case, that would be AWS-Server-IP:8000.

There are several reasons your trtis pod may be stuck. I see you are deploying the brain segmentation pipeline. During setup for this pipeline, did you unzip the model to /clara/common/models? This is outlined in the Setup section on NGC here. One reason it may fail is if it is looking for a model that isn’t currently there.
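For reference, that setup step looks roughly like this. The archive name below is a placeholder (use whatever file you actually downloaded from NGC); the destination path is the one the NGC setup section names.

```shell
# Sketch of the model setup step for the brain segmentation pipeline.
MODEL_ZIP="brain_segmentation_model.zip"   # placeholder: your NGC download
MODEL_DIR="/clara/common/models"           # path the pipeline expects
if [ -f "$MODEL_ZIP" ]; then
    sudo mkdir -p "$MODEL_DIR"
    sudo unzip -o "$MODEL_ZIP" -d "$MODEL_DIR"
else
    echo "Download the model archive from NGC first: $MODEL_ZIP not found"
fi
```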

If you fix these problems, redeploy the pipeline, and still get errors, you will need to look a little further into the pod details. You can use commands like 'kubectl describe pod <trt-pod-id>' or 'kubectl logs <trt-pod-id>' to see what is going on in detail. You could also run 'kubectl get deployments', find the trtis deployment, and run 'kubectl describe deployment <trt-deploy-id>'. In all of these commands, replace the angle-bracketed IDs with whatever kubectl shows.
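Gathered in one place, and guarded so the sketch is a no-op on machines without kubectl, the debugging sequence looks like this. The pod name is a placeholder for whatever kubectl get pods actually prints on your cluster.

```shell
# Inspect why the trtis pod is in CrashLoopBackOff.
TRTIS_POD="trtis-clara-pipesvc-xxxxx"      # placeholder: real name from 'kubectl get pods'
if command -v kubectl >/dev/null 2>&1; then
    kubectl get pods                       # find the failing pod's full name
    kubectl describe pod "$TRTIS_POD"      # events section: why it restarts
    kubectl logs "$TRTIS_POD"              # container stdout/stderr
    kubectl get deployments                # locate the trtis deployment
else
    echo "kubectl not found; run these on the Clara Deploy host"
fi
```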

These commands should help you figure out what's going on. Please let me know if this gives you some better insight. For what it's worth, the documentation on NGC is quite extensive, so I would go step by step through it again to confirm nothing was missed.

Thanks. I will check this and get back to you


Hi,
We were able to solve the CrashLoopBackOff problem: the model had not been unzipped into the correct place. But we are still not able to see the Argo dashboard; can you please guide us on what the problem might be? The API version is set to 0.3.0 in brain-tumor-pipeline.yaml. The storescu command was run in the output directory, and the DICOM output files are present in input/dcm. Are these the correct directories? Also, in dicom-server-config.yaml we have changed the IP address to the local IP address and have also changed the pipeline ID.
Please let us know if we have missed something.

When the API version is set to 0.3.0, the Argo dashboard should show data for that pipeline after it is executed. But even when no pipelines are present, you should be able to log in to Argo and view the GUI. It should be on the Amazon instance IP address that you are deploying Clara onto, which looks like it would be 10.0.0.58:8000. Can you confirm this is where you looked?

When you run the pipeline, can you confirm that you receive DICOM output in the directory where you are running the storescu command? If you are receiving outputs, then everything in your .yaml file is correct. The IP addresses and pipeline IDs you give in the .yaml file only affect the DICOM adapter; again, you should be able to see Argo anytime the Clara Deploy platform is running, regardless of individual instantiated pipelines.
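One quick way to confirm that round trip, assuming storescp is writing into the current directory (adjust OUT_DIR if yours differs):

```shell
# Count DICOM files that storescp has written; a nonzero count means
# the pipeline pushed results back. OUT_DIR is a placeholder.
OUT_DIR="."    # the directory storescp was started in
count=$(find "$OUT_DIR" -maxdepth 2 -type f -name '*.dcm' 2>/dev/null | wc -l)
echo "DICOM files received: $count"
```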

No, we are not able to log in to the GUI even without a pipeline. When we look at 10.0.0.58:8000, we get connection timed out. Can you tell us what the problem could be?

Maybe we need to step back a bit and look at the more important things for the pipeline, before diving into the Argo UI issue.

So, as I gathered from reading through the thread:

  • The pipeline succeeded, in that the resultant DICOM instances were received at the DICOM listener you specified, i.e. the DCMTK service running locally. Please open the series and check the result.
  • If you have not already, please access the Clara Rendering UI (:8080), look for the dataset for your pipeline, and then see the rendering of the brain segmentation.
  • If both of the above are working, the Argo UI is not a big concern, as it only serves to show pipeline execution status, which you can grab manually from the container logs with kubectl logs <container/pod>. In any case, the Argo UI is at :8000.
  • One more bit of information: going forward, in the next release, Argo will be deprecated, and native Clara Deploy orchestration mode will be the only mode.
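On the Argo timeout itself, a quick probe from the machine running the browser can separate a Clara problem from a network one. This is only a sketch; the address is the one quoted earlier in the thread, and note that a 10.0.0.x address is private to the VPC, so from outside AWS you would need the instance's public IP and an open security-group rule for the port.

```shell
# Reachability probe for the Argo UI port. A timeout here usually
# points at a firewall / security-group rule rather than Clara itself.
ARGO_HOST="10.0.0.58"   # example from the thread; use the public IP from outside AWS
ARGO_PORT="8000"
if command -v curl >/dev/null 2>&1 \
   && curl --max-time 5 -s -o /dev/null "http://$ARGO_HOST:$ARGO_PORT/"; then
    echo "port $ARGO_PORT answers on $ARGO_HOST"
else
    echo "no answer on $ARGO_HOST:$ARGO_PORT; check the security group or use the public IP"
fi
```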

Hope this helps.
MQ