Liver Segmentation Workflow - Clara Deploy - View rendered output image

Hello Everyone,

  1. I am trying to run the liver segmentation workflow from the Readme.pdf doc. My understanding of this workflow is that when we provide a DICOM series of abdominal images, the NVIDIA Clara segmentation model segments the liver from the abdomen.

  2. Am I right to understand that the input will be a series of associated DICOM images of the abdomen?

  3. Usually a human loads the image in a viewer and explores it in different orientations to mark the region of interest; I believe this is done here using dense networks. Am I right? Also, can I access the details of each container through the table of port addresses provided in the doc? For example, should localhost:50051 show me details about the DICOM writer? I guess that is the purpose of the table section.

  4. Where are the input and output files stored? I know that we have a test-data folder with images for an abdominal study. Where is this path configured for the job to pick up the data? I don't see any path info in the "values.yaml" file. Also, where is the output image stored? Under the clara-io folder I only see the payload id (the id starts with 0000…) and the corresponding input files for dicom-reader, dicom-writer, etc., but I don't see anything in the "output" folder of dicom-writer, whereas I do see .mhd and .raw files in the "input" folder of dicom-writer. Is the absence of data under the "output" folder of dicom-writer the reason I am not able to view the output? Can you let me know how this can be fixed?

  5. How can I view the output, i.e. the organ marked as liver? When I opened the Clara Dashboard, there were no jobs and no rendered output, even though I successfully executed the clara-wf command shown below. In AIAA we had a connection between the MITK client and the AIAA server, so we were able to see the results there.

I am following the guide steps and am not sure whether I am missing any prerequisite here. For exploring the reference/sample workflow, should I also follow chapter 6 (Clara Containers), or is sticking to chapter 5 (Workflow Development Guide) sufficient to see the end-to-end workflow?

Please refer to the attachment for the output that I got. Can you help me understand whether there is any issue with it? I was able to see that startup, prepare, execute and cleanup were successful.

./clara-wf test fd3ee8bf-b9f3-4808-bd60-243f870ff9bd

Your response would definitely help me gain a better understanding. Thank you


Hi
Thanks for trying out the Clara Deploy SDK and for your questions. Please find my answers below:

  1. Correct, the liver segmentation workflow segments the liver from an abdominal CT series.

  2. Correct

  3. The Clara SDK uses Kubernetes (k8s) and Helm. Those tables show the communication within a pod between Clara components. You don't need to worry about monitoring any of those ports. You should simply send DICOMs to Clara's AE title, and that should trigger the correct workflow for you.

  4. There are multiple questions here and multiple scenarios for what happens; I will try to break it up:

  • Clara stores and executes workflows using the /clara-io folder.
  • Running run_liver_docker.sh tests the liver segmentation model. In this step you are only testing the model and checking that it was packaged correctly. All internal models use MHD files, therefore the input is located in clara-reference-app/input and the container produces the output at clara-reference-app/output. There are no DICOMs in this step. For visualization you can use any viewer that opens MHD, such as MITK, ITK-SNAP, or 3D Slicer.
  • Running the clara-wf test command is for testing that your workflow can be triggered. For this you should have created a workflow id and configured it. This command copies clara-reference-workflow/test-data/00000000-0000-0000-0000-000000000000 into the payloads folder under /clara-io and then starts your workflow.
    1. To see the results you should use MITK, ITK-SNAP, or 3D Slicer. If you are looking for a full workflow demo, that requires a way to send DICOMs, such as dcmtk (a command-line tool) or Orthanc (a free PACS server); a minimal dcmtk example is sketched below. If that is the case, please let me know and I can answer that separately.
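
    For reference, a minimal sketch of that push step with dcmtk, assuming dcmtk is installed, Clara's DICOM server listens on port 104, and the calling AE title is one the DICOM server accepts (adjust the IP, port, and AE titles to your setup):

    # Hedged sketch: push a folder of abdominal CT DICOMs to Clara's LiverSeg AE title.
    # <clara-server-ip> is a placeholder for the machine where Clara is installed.
    storescu -v +sd +r -xb -aet "ORTHANC" -aec "LiverSeg" <clara-server-ip> 104 /folder/with/sample/DICOM/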

    Hello aharouni,

    Thanks for the detailed response. Currently I am more of a Clara platform user than a DL/Docker person, so your response is certainly helpful for me to learn and understand things better. I am encountering a few issues and would like to seek your/forum users' inputs.
    My aim
    To execute the liver segmentation workflow successfully.

    1. I executed "./clara-wf test fd3ee8bf-b9f3-4808-bd60-243f870ff9bd". Though it executed successfully, the last line of the command-line output had a message like "release trtis deleted". Is that expected? What does it mean?
    2. Am I right to understand that the output of the above command would be images with the liver marked/highlighted?
    3. I understand the images can be found in an output folder. After execution, when I navigated to
      a) "clara-reference-app/output" - it was empty
      b) "/clara-io/clara-core/payloads/00000000-0000-0000-0000-000000000000/dicom-writer/output" - it was empty
      c) what's the difference between the two output folders in 3a) and 3b)? When do we see data in them?
      d) why are they empty, and how do I see the output files?
      e) When I executed run_liver_local.sh, the output was stuck (loading for more than 90 minutes) with the message "Wait until TRTIS is ready". Are the output folders empty because of this?
    4. Under the payloads folder for the job id given above, apart from the stage folders (dicom-reader, ai-livertumor, dicom-writer), I was able to see 3 more folders: ai-vnet, locks and recon. Can you please let me know what they are?
    5. Am I right to understand that images are picked up from some local path, as we don't have integration with a PACS server for the reference/sample workflow?
      a) If yes, where is this path configured? In the "dicom-server" field of the values.yaml file?
      b) Should I wish to fetch images directly from a PACS server, how do we set this up? Is there any sample file that you can share with us?
    6. Finally, with respect to viewing the output images (liver segmented): I should be using MITK, but how do I establish the connection between the Clara SDK and MITK? I understand that MITK has a connection pane to provide the server details.
      a) For example, when I was using NVIDIA AIAA, I knew the IP address where the AIAA server was running, so I keyed in the details in the "Connection pane" of MITK. I downloaded MITK and a sample spleen dataset on my local desktop. Once I upload the input spleen image to MITK and click on "NVIDIA segmentation", it makes calls to the AIAA server to guide/assist me in marking the region of interest.
      How do we establish this connection here? We are running Clara Deploy on our remote GPU server. If I have to key in the IP address of my remote server, how can I view the output when the output folders are empty?
      b) What is the use of ITAdmin and the render server then? Just to view the job status?

    Thanks for your time and inputs

    Hi

    Thank you for your questions. Please find my answers below

    1. The ./clara-wf tool is for testing a workflow while you are creating it. The tool has a bug where it only triggers the first stage of a workflow; we are working on fixing/expanding this.

    2. If you want to run a workflow, please go to clara-reference-app and run ./run_liver_docker.sh. It will take the MHD inputs from the input folder and produce an MHD of the liver segmentation in the clara-reference-app/output folder; you can then use MITK to view the results (see the sketch after this list).

    3. Please run the docker scripts for liver and vnet instead of the clara-wf command.
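
    Putting answers 2 and 3 together, a minimal sketch of this step, assuming the reference app layout described earlier (the exact script output may differ):

    # Hedged sketch: run the packaged liver segmentation container on the bundled MHD input.
    cd clara-reference-app
    ./run_liver_docker.sh
    # The segmentation result should land in the output folder as MHD files (e.g. image.mhd / image.seg.mhd).
    ls output/
    # Copy the .mhd/.raw pair to a machine with a display and open it in MITK, ITK-SNAP, or 3D Slicer.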

    If you would like to send DICOMs to trigger a workflow, you need to push them manually, either from a directory using dcmtk or from a PACS like Orthanc (you can run this inside a Docker container for testing).

    Hello aharouni,

    Yes, I was able to execute and see the output segmented image in MITK. Thanks a ton for your patience in answering my queries.

    Currently, I download the output images from the remote server to my local desktop, where MITK is installed, to view the output (segmented images). Is there any way to view the output online, like in a web viewer? The render server?

    Hi

    Yes there is.

    The end-to-end demo would include:
    1- Install a PACS system like Orthanc or dcm4chee. You can run either as a Docker container. I would recommend Orthanc, as it has an easy UI to upload DICOMs instead of sending them through the command line.
    a- For Orthanc you can follow the instructions at http://book.orthanc-server.com/users/docker.html
    b- For DCM4CHEE, please follow the instructions at https://github.com/dcm4che/dcm4chee-arc-light/wiki/Run-minimum-set-of-archive-services-on-a-single-host
    2- Set up a web viewer such as OHIF (http://ohif.org/) or Oviyam (https://dcm4che.atlassian.net/wiki/spaces/OV/pages/3375111/Oviyam+Installation) to connect to your PACS.
    3- Trigger workflows from the PACS.

    For step-by-step instructions to set this up, you should:

    1. Copy the reference models to the Clara working directory

    sudo cp -r <full path of folder>/test-data/models/* /clara-io/models/
    

    2. Publish the reference workflows

    cd clara-reference-workflow
      sudo ./clara-wf publish_chart 1db65f99-c9b7-4329-ab9c-d519e0557638 "CT Organ seg" /clara-io/clara-core/workflows/
      sudo ./clara-wf publish fd3ee8bf-b9f3-4808-bd60-243f870ff9bd "LiverSeg" /clara-io/clara-core/workflows/
    

    3. Change the IP for ORTHANC
    a. Open clara-platform/files/dicom-server-config.yaml and change host-ip to your current machine's IP
    b. Change the source ae-title to ORTHANC
    c. Change the port to 4242, for example:
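
    A minimal sketch of what those edits touch, assuming the default file layout (only the keys mentioned above change; use any editor you like):

    # Hedged sketch: edit the DICOM server config in place.
    sudo nano clara-platform/files/dicom-server-config.yaml
    # Under dicom > scp > sources:       set host-ip to your machine's IP and ae-title to ORTHANC.
    # Under dicom > scu > destinations:  keep the ORTHANC entry pointing at the Orthanc host, port 4242.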

    4. Restart Clara to load the workflows
    a. Stop Clara

    sudo helm delete clara --purge
    

    b. Wait until Clara terminates

    watch -n 2 kubectl get pods
    

    c. Re-deploy Clara by going to the clara folder

    sudo helm install ./clara-platform -f ./scripts/clara-helm-config.yaml  -n clara
    
    5. Install and run Orthanc (commands from http://book.orthanc-server.com/users/docker.html)
      1. Print a JSON config
    docker run --rm --entrypoint=cat jodogne/orthanc /etc/orthanc/orthanc.json >  <yourLocalFolder4othenac>/orthanc.json
    

    Edit the config file to add the 2 lines below under the "DicomModalities" section, after this commented example:

    // "clearcanvas" : [ "CLEARCANVAS", "192.168.1.1", 104, "ClearCanvas" ]
    "clara-liver" : [ "LiverSeg", "yourIPaddress", 104 ],
    "clara-ctseg" : [ "OrganSeg", "yourIPaddress", 104 ]
    

    2. Start Orthanc

    docker run -p 4242:4242 -p 8042:8042 --rm --name orthanc -v <yourLocalFolder4othenac>/orthanc.json:/etc/orthanc/orthanc.json -v <yourLocalFolder4othenac>/orthanc-db:/var/lib/orthanc/db jodogne/orthanc-plugins /etc/orthanc --verbose
    

    3. Open a web browser at http://localhost:8042
    4. Upload a couple of abdominal CT DICOM studies
    5. Go into patient > study > series, then select "Send to DICOM modality" and choose clara-ctseg or clara-liverseg

    This will trigger the Clara workflow and you should receive the DICOMs back in Orthanc. All that is left is integrating a viewer such as OHIF or Oviyam.
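
    If you want to confirm from the command line that results actually arrived, a small sketch against Orthanc's REST API, assuming Orthanc is reachable on port 8042 (add -u orthanc:orthanc if authentication is enabled):

    # Hedged check: list the studies Orthanc knows about; the processed results should show up here.
    curl http://localhost:8042/studies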

    Hello aharouni,

    I tried uploading the 'I000.dcm' file from the test-data present under the clara-reference-workflow folder and clicked on "Send to DICOM modality" --> clara-ctseg. Though the upload was successful, I got the error below. Can you please help me understand what the issue is here?

    E0528 10:29:18.865254 OrthancException.h:85] Error in the network protocol: 
    DicomUserConnection - connecting to AET "OrganSeg": 
    Failed to establish association (0006:0317 Peer aborted Association 
    (or never connected); 0006 :031c TCP Initialization Error: Connection refused)
    

    The above error appeared in the Orthanc server window, whereas in the web client I got the error message "Error during store".

    Please find below the change that I made in the orthanc.json file

    * This parameter is case-sensitive.
         **/
        // "clearcanvas" : [ "CLEARCANVAS", "192.168.1.1", 104, "ClearCanvas" ]
           "clara-liver":[ "LiverSeg", "172.xx.xxx.xxx", 104],
           "clara-ctseg":[ "OrganSeg", "172.xx.xxx.xxx", 104]
        /**
         * By default, the Orthanc SCP accepts all DICOM commands (C-ECHO,
         * C-STORE, C-FIND, C-MOVE) issued by the registered remote SCU
    

    As you can see, I have updated those two lines as per the steps above

    I have also provided my dicom-server.config file

    dicom:
      scp:
        port: 104
        ae-titles:
          - ae-title: OrganSeg
            processors:
              - "Nvidia.Clara.Dicom.Processors.JobProcessor, Nvidia.Clara.DicomServer"
          - ae-title: LiverSeg
            processors:
              - "Nvidia.Clara.Dicom.Processors.JobProcessor, Nvidia.Clara.DicomServer"
          - ae-title: CT_AI
            processors:
              - "Nvidia.Clara.Dicom.Processors.JobProcessor, Nvidia.Clara.DicomServer"
        max-associations: 2
        verification:
          enabled: true
          transfer-syntaxes:
            - "1.2.840.10008.1.2" #Implicit VR Little Endian
            - "1.2.840.10008.1.2.1" #Explicit VR Little Endian
            - "1.2.840.10008.1.2.2" #Explicit VR Big Endian
        log-dimse-datasets: false
        reject-unknown-sources: true
        sources:
          - host-ip: 172.xx.xxx.xxx
            ae-title: ORTHANC
      scu:
        ae-title: ClaraSCU
        max-associations: 2
        destinations:
          - name: ORTHANC
            host-ip: 172.xx.xx.xxx
            port: 4242
            ae-title: ORTHANC
    
    workflows:
      - name: organ-seg
        clara-ae-title: OrganSeg
        destination-name: ORTHANC
        workflow: 1db65f99-c9b7-4329-ab9c-d519e0557638
      - name: liver-seg
        clara-ae-title: LiverSeg
        destination-name: ORTHANC
        workflow: fd3ee8bf-b9f3-4808-bd60-243f870ff9bd
      - name: user-workflow
        clara-ae-title: CT_AI
        destination-name: DCM4CHEE
        workflow: d33d3356-e853-496f-a40c-6cf271a12a55
    
    storage:
        output: dicom-writer/output
        payloads: /payloads
    

    Please note that Docker is running on our remote server (the Docker host). So the Orthanc web client is opened on my local desktop via port forwarding (172.xx.xxx.xxx:8042 (the remote server IP address) is forwarded to localhost:8222). Is there anything that I need to do for this?

    I kindly request you to guide me on how I can fix this issue.

    Hello aharouni,

    Can you help us with the above issue?

    Hi

    It sounds like the port forwarding is causing an issue. I am not sure why you need this; for testing you could simply have both Clara and Orthanc (in a Docker container) running on one server and then access Orthanc from any other PC.

    In order to see errors from Clara's DICOM server, check the logs with:

    kubectl logs clara-clara-platform-xxxxx dicom-server
    

    where xxxxx is the random suffix k8s generated for you.
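
    A quick way to find that suffix, assuming a default kubectl setup:

    # Hedged helper: list the pods and read off the generated clara-clara-platform-xxxxx name.
    kubectl get pods | grep clara-clara-platform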

    If you have to do port forwarding and your Orthanc is not on the same server as Clara, then in orthanc.json you should point to the Clara server IP:

    "clara-liver":[ "LiverSeg", "<ip where clara is installed>", 104]
    

    Then in the dicom-server config you should point to the Orthanc server IP:

    sources:
          - host-ip: <ip where orthanc is installed>
            ae-title: ORTHANC
      scu:
        ae-title: ClaraSCU
        max-associations: 2
        destinations:
          - name: ORTHANC
            host-ip: <ip where orthanc is installed>
            port: 4242 --> make sure this is also forwarded 
            ae-title: ORTHANC
    

    You should also make sure that port 4242 is forwarded so it can be reached from the server where Clara is installed. Once you have fixed this, you should send a full series, not single images. You can also get abdominal CT images from https://imaging.nci.nih.gov/nbia-search/
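
    Before re-sending a series, it can also help to verify basic DICOM connectivity with a C-ECHO, assuming dcmtk is installed and the calling AE title matches a source registered in your dicom-server config:

    # Hedged connectivity check from the Orthanc host towards Clara's DICOM server.
    echoscu -v -aet "ORTHANC" -aec "LiverSeg" <ip where clara is installed> 104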

    Alternatively, just to eliminate the port forwarding, try installing Docker (and Orthanc) on the same server as Clara. Another simple option is to install a CLI tool to send and receive DICOMs by following the steps below:

    1- Install the dcmtk package to send and receive DICOMs

    sudo apt-get install dcmtk
    

    2- Open a new terminal to run a process that receives DICOMs

    # Create a new destination directory
    mkdir <DICOM destination folder>
    cd <DICOM destination folder>
    sudo storescp -v --fork -aet ORTHANC 4242
    

    3- Send DICOM data to trigger one of the workflows, using either of:

    storescu -v +sd +r -xb -v -aet "DCM4CHEE" -aec "OrganSeg" <clara ip> 104 /folder/with/sample/DICOM/
    storescu -v +sd +r -xb -v -aet "DCM4CHEE" -aec "LiverSeg" <clara ip> 104 /folder/with/sample/DICOM/
    

    4- Go to the destination folder and check out the output, for example:
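
    Assuming the storescp process from step 2 is still running and writing into that folder:

    # Hedged check: the received DICOM instances should show up as files here.
    ls -l <DICOM destination folder>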

    Hope this helps

    Hello Aharouni,

    We have both Clara and Orthanc running on the same server, i.e. 172.xx.xxx.xxx. However, as it is a remote system, we don't have any UI there to view the Orthanc web client. Hence I forward both ports 8042 and 4242 to local ports so I can view them from my local system.

    So I have given the same IP in both case 1 and case 2 as shown below

    case 1)

    "clara-liver":[ "LiverSeg", 172.xx.xxx.xxx, 104]
    

    case 2)

    sources:
          - host-ip: <ip where orthanc is installed>
            ae-title: ORTHANC
      scu:
        ae-title: ClaraSCU
        max-associations: 2
        destinations:
          - name: ORTHANC
            host-ip: 172.xx.xxx.xxx
            port: 4242 --> this is also forwarded 
            ae-title: ORTHANC
    

    I tried uploading all the images of a series together and got the same error. However, I managed to get the logs and was able to see the error message below:

    2019-05-31 10:22:29.731 +00:00 [INFO] [clara-clara-platform-5db84f9d75-5qspr] Nvidia.Clara.Dicom.Program[1] {} Initialize application with /app/app.yaml
    2019-05-31 10:22:30.106 +00:00 [EROR] [clara-clara-platform-5db84f9d75-5qspr] Nvidia.Clara.Dicom.Configuration.ConfigurationValidator[1] {} Specified destination-name 'DCM4CHEE' cannot be found in workflow 'workflows>user-workflow>destination-name'
    2019-05-31 10:22:30.107 +00:00 [FATL] [clara-clara-platform-5db84f9d75-5qspr] Nvidia.Clara.Dicom.Program[1] {} Invalid DICOM configuration.
    

    Since we aren't using DCM4CHEE, I have removed those entries from my dicom-server config file and from the clara-reference-workflow/charts/clara-workflow/values.yaml file. Please find the files below for your reference.

    1. Dicom-server.config file
    dicom:
      scp:
        port: 104
        ae-titles:
          - ae-title: OrganSeg
            processors:
              - "Nvidia.Clara.Dicom.Processors.JobProcessor, Nvidia.Clara.DicomServer"
          - ae-title: LiverSeg
            processors:
              - "Nvidia.Clara.Dicom.Processors.JobProcessor, Nvidia.Clara.DicomServer"
    
        max-associations: 2
        verification:
          enabled: true
          transfer-syntaxes:
            - "1.2.840.10008.1.2" #Implicit VR Little Endian
            - "1.2.840.10008.1.2.1" #Explicit VR Little Endian
            - "1.2.840.10008.1.2.2" #Explicit VR Big Endian
        log-dimse-datasets: false
        reject-unknown-sources: true
        sources:
          - host-ip: 172.xx.xxx.xxx
            ae-title: ORTHANC
      scu:
        ae-title: ClaraSCU
        max-associations: 2
        destinations:
          - name: ORTHANC
            host-ip: 172.xx.xxx.xxx
            port: 4242
            ae-title: ORTHANC
    
    workflows:
      - name: organ-seg
        clara-ae-title: OrganSeg
        destination-name: ORTHANC
        workflow: 1db65f99-c9b7-4329-ab9c-d519e0557638
      - name: liver-seg
        clara-ae-title: LiverSeg
        destination-name: ORTHANC
        workflow: fd3ee8bf-b9f3-4808-bd60-243f870ff9bd
    
    2. Values.yaml file (Clara-reference-workflow/clara-workflow/charts/values.yaml file)
    # Default values for clara-workflow.
    # This is a YAML-formatted file.
    # Declare variables to be passed into your templates.
    
    # Workflow parameters
    ## Value used here should be one of key in 'workflows' object
    workflow:
      name: "ai-only"
      id: "613e3fce-0d96-495a-a760-f43e68deefd8" # Need to specify manually/programmatically
    
    # Job parameters
    job:
      name: "Test Job"                           # Need to specify manually/programmatically
      id: "00000000-0000-0000-0000-000000000000" # Need to specify manually/programmatically
    
    # DICOM Server parameters
    ## Set both values (input & output) to the empty string if you don't want to use DICOM Server
    dicomServer:
      input: "dicom-reader/input"                # First stage's input host path would be "/clara-io/clara-core/payloads/{{job.id}}/dicom-reader/input"
      output: "dicom-writer/output"              # Last stage's output host path would be "/clara-io/clara-core/payloads/{{job.id}}/dicom-writer/output"
    
    # Miscellaneous parameters
    runSimple: "FALSE"                           # Need to specify manually/programmatically
    trtisUri: "localhost:8000"                   # Need to specify manually/programmatically
    
    # Workflow definitions
    ## - Keys that work with DICOM Server should match with workflow IDs used in DICOM Server
    ## - An item in 'args' array should be an array of strings if the arguments exist (If not, specify the empty string)
    workflows:
      1db65f99-c9b7-4329-ab9c-d519e0557638:
        name: "organ-seg"
        stages: ["dicom-reader", "ai-vnet", "dicom-writer"]
        waitLocks: ["", "dicom-reader.lock", "ai-vnet.lock"]
        ioFolders: ["dicom-reader/input", "dicom-reader/output", "ai-vnet/output", "dicom-writer/output"]
        args: ["", "", ""]
      fd3ee8bf-b9f3-4808-bd60-243f870ff9bd:
        name: "liver-seg"
        stages: ["dicom-reader", "ai-livertumor", "dicom-writer"]
        waitLocks: ["", "dicom-reader.lock", "ai-livertumor.lock"]
        ioFolders: ["dicom-reader/input", "dicom-reader/output", "ai-livertumor/output", "dicom-writer/output"]
        args: ["", "", ""]
      1995f10e-ee14-4d67-b307-452051637dbb:
        name: "dicom-reader-only"
        stages: ["dicom-reader"]
        waitLocks: [""]
        ioFolders: ["dicom-reader/input", "dicom-reader/output"]
        args: [""]
      8371285c-8ea8-41cd-958e-44e5664c6ec3:
        name: "dicom-reader-writer"
        stages: ["dicom-reader", "dicom-writer"]
        waitLocks: ["", "dicom-reader.lock"]
        ioFolders: ["dicom-reader/input", "dicom-reader/output", "dicom-writer/output"]
        args: ["", ""]
      613e3fce-0d96-495a-a760-f43e68deefd8:
        name: "ai-only"
        stages: ["ai-vnet"]
        waitLocks: [""]
        ioFolders: ["ai-vnet/input", "ai-vnet/output"]
        args: [""]
      eb66f705-13ea-402c-8baa-143fb3fd9cd5:
        name: "dicom-writer-only"
        stages: ["dicom-writer"]
        waitLocks: [""]
        ioFolders: ["dicom-writer/input", "dicom-writer/output"]
        args: [""]
    
    # Stage definitions
    stages:
    ##BEGIN_ai-vnet##
      ai-vnet:
        image:
          repository: clara/ai-vnet
          tag: "0.1.8"
        mount:
          in:
            name: "input"
          out:
            name: "output"
        stageName: "ai-vnet"
        appDir: "/app"
        inputLock: "/app/locks/input.lock"
        inputs: "input"
        logName: "/app/logs/ai-vnet.log"
        outputs: "output;image.mhd;image.seg.mhd;config_render.json"
        lockDir: "/app/locks"
        lockName: "ai-vnet.lock"
        timeout: "300"
        publishPath: "/publish"
    ##END_ai-vnet##
    ##BEGIN_ai-livertumor##
      ai-livertumor:
        image:
          repository: clara/ai-livertumor
          tag: "0.1.8"
        mount:
          in:
            name: "input"
          out:
            name: "output"
        stageName: "ai-livertumor"
        appDir: "/app"
        inputLock: "/app/locks/input.lock"
        inputs: "input"
        logName: "/app/logs/ai-livertumor.log"
        outputs: "output;image.mhd;image.seg.mhd;config_render.json"
        lockDir: "/app/locks"
        lockName: "ai-livertumor.lock"
        timeout: "300"
        publishPath: "/publish"
    ##END_ai-livertumor##
    ##BEGIN_dicom-reader##
      dicom-reader:
        image:
          repository: clara/dicom-reader
          tag: "0.1.8"
        mount:
          in:
            name: "input"
          out:
            name: "output"
        stageName: "dicom-reader"
        appDir: "/app"
        inputLock: "/app/locks/input.lock"
        inputs: "input"
        logName: "/app/logs/dicom-reader.log"
        outputs: "output"
        lockDir: "/app/locks"
        lockName: "dicom-reader.lock"
        timeout: "300"
        publishPath: ""
    ##END_dicom-reader##
    ##BEGIN_dicom-writer##
      dicom-writer:
        image:
          repository: clara/dicom-writer
          tag: "0.1.8"
        mount:
          in:
            name: "input"
          out:
            name: "output"
        stageName: "dicom-writer"
        appDir: "/app"
        inputLock: "/app/locks/input.lock"
        inputs: "payloads/dicom-reader/input;input"
        logName: "/app/logs/dicom-writer.log"
        outputs: "output"
        lockDir: "/app/locks"
        lockName: "dicom-writer.lock"
        timeout: "300"
        publishPath: ""
    ##END_dicom-writer##
    ##BEGIN_user-ai##
      user-ai:
        image:
          repository: user-ai
          tag: latest
        mount:
          in:
            name: "input"
          out:
            name: "output"
        stageName: "user-ai"
        appDir: "/app"
        inputLock: "/app/locks/input.lock"
        inputs: "input"
        logName: "/app/logs/user-ai.log"
        outputs: "output"
        lockDir: "/app/locks"
        lockName: "user-ai.lock"
        timeout: "300"
        publishPath: "/publish"
    ##END_user-ai##
    ##STAGE_MARKER##
    
    nameOverride: ""
    fullnameOverride: ""
    
    resources: {}
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      # limits:
      #  cpu: 100m
      #  memory: 128Mi
      # requests:
      #  cpu: 100m
      #  memory: 128Mi
    
    nodeSelector: {}
    
    tolerations: []
    

    Can you please let me know if I have made any mistakes in dicom-server config file?

    1. Can you also let me know how we can incorporate our own model in Clara Deploy? For instance, right now we have a simple segmentation model written in PyTorch and would like to import it into Clara Deploy. Can you please share the steps with us? Regarding the same, I created a post a while back (https://devtalk.nvidia.com/default/topic/1052489/clara-deploy-sdk-new-/how-to-run-execute-our-own-segmentation-model-clara-deploy/) but couldn't get any response. Can you please help me figure this out?

    Hi

    Let's keep this thread for the Clara demo. There are 2 separate issues you are facing:

    1- Clara DICOM server issue
    In your DICOM config file you have the lines below, where there is a DCM4CHEE reference without declaring it; that is causing the error. Please change it to ORTHANC:

    - name: user-workflow
        clara-ae-title: CT_AI
        destination-name: DCM4CHEE  <<---- change this to ORTHANC
        workflow: d33d3356-e853-496f-a40c-6cf271a12a55
    

    You may need to restart Clara for your changes to apply. Please refer to the Clara restart instructions I mentioned before, recapped below.
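
    For convenience, the restart sequence from earlier in this thread (run from the clara folder):

    # Stop clara and wait for the pods to terminate.
    sudo helm delete clara --purge
    watch -n 2 kubectl get pods
    # Re-deploy clara.
    sudo helm install ./clara-platform -f ./scripts/clara-helm-config.yaml -n clara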

    2- Getting Orthanc to work, which is unrelated to Clara.
    If you run Orthanc from Docker there is no need to do any port forwarding; the container already exposes port 4242 and the UI on 8042, so you just map them with your docker run command:

    docker run -p 4242:4242 -p 8042:8042 --rm --name orthanc -v <yourLocalFolder4othenac>/orthanc.json:/etc/orthanc/orthanc.json -v <yourLocalFolder4othenac>/orthanc-db:/var/lib/orthanc/db jodogne/orthanc-plugins /etc/orthanc --verbose
    

    If you are already using either of these ports on your server, you can just change them, and those new ports are the ones you should use in your Clara config files. Once you have Orthanc up, verify that you can upload DICOM images from the UI (or from the command line, as sketched below).
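
    You can also verify the upload path from the command line, assuming Orthanc's REST API is reachable on port 8042 (add -u orthanc:orthanc if authentication is enabled in your orthanc.json):

    # Hedged check: push a single DICOM file into Orthanc via its REST API, then list studies.
    curl -X POST http://localhost:8042/instances --data-binary @/path/to/some/file.dcm
    curl http://localhost:8042/studies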

    Hope this helps


    Hello aharouni,

    I managed to resolve most of the issues and am now able to send the images to the DICOM modality successfully. However, can you let me know how to view the processed/segmented outputs?

    I mean, the OHIF viewer doesn't have an installer yet, so I am not able to download it.

    Second, will we be able to use the 'Oviyam' viewer with Orthanc?

    Anyway, I tried the built-in "Orthanc web viewer", but it displayed an error message saying "Image is not supported by the web viewer".

    I also tried the Osimis viewer, and though the images loaded successfully, there wasn't any graphical data (segmented region) in the output. I was only able to see the Patient ID, Study ID and other text information. I am able to navigate/switch to each image, but I didn't find any output; all 147 images had only text info.

    Is this to do with the Clara output? Based on the log messages below, I thought it executed successfully:

    I0607 09:43:46.362931 DicomModalityStoreJob.cpp:60] Sending instance 2e93c582-55d740f0-0a390fbb-2f1493ac-84b0bdd5 to modality "OrganSeg"
    I0607 09:43:46.362995 FilesystemStorage.cpp:155] Reading attachment "b05692ef-e9da-43c9-ac60-1aaab647ce34" of "DICOM" content type
    I0607 09:43:46.366511 DicomModalityStoreJob.cpp:60] Sending instance e0d7deb6-97c491a9-05c9a937-bed0a721-5020a5ea to modality "OrganSeg"
    I0607 09:43:46.366576 FilesystemStorage.cpp:155] Reading attachment "b7f9911c-a30e-4878-8fc0-a9c1fc28a8aa" of "DICOM" content type
    I0607 09:43:46.370171 DicomModalityStoreJob.cpp:60] Sending instance 6ac38bb6-c0bd2701-98f47387-d9e4d458-6b712d6e to modality "OrganSeg"
    I0607 09:43:46.370231 FilesystemStorage.cpp:155] Reading attachment "d01fa4e8-bb14-4bce-9f6e-acf759de7fc0" of "DICOM" content type
    I0607 09:43:46.375008 DicomModalityStoreJob.cpp:60] Sending instance 941f7192-32aaf679-602cb0ed-3c281d2f-0065886b to modality "OrganSeg"
    I0607 09:43:46.375072 FilesystemStorage.cpp:155] Reading attachment "61d5235f-8922-47d8-8d1f-42ffbb58a5aa" of "DICOM" content type
    I0607 09:43:46.379556 DicomModalityStoreJob.cpp:60] Sending instance 4d77ed5b-6a96927d-e2bccd6d-2adab483-1c8973a7 to modality "OrganSeg"
    I0607 09:43:46.379615 FilesystemStorage.cpp:155] Reading attachment "c9bc42a0-0c9d-4fad-812e-6e8a9afdb634" of "DICOM" content type
    I0607 09:43:46.386127 DicomModalityStoreJob.cpp:60] Sending instance 0ca42183-31fb5ed1-e68eaa52-7d0f3a89-4c0f7f54 to modality "OrganSeg"
    I0607 09:43:46.386187 FilesystemStorage.cpp:155] Reading attachment "b39f8848-2648-4aea-9dbb-5df59249731c" of "DICOM" content type
    I0607 09:43:46.389856 DicomModalityStoreJob.cpp:60] Sending instance 6ece4e5e-69650fc8-391fc82a-0356fddb-0f06dfc2 to modality "OrganSeg"
    I0607 09:43:46.389918 FilesystemStorage.cpp:155] Reading attachment "ce5ab965-360b-4fd4-b6d6-e45109135ed3" of "DICOM" content type
    I0607 09:43:46.394767 DicomModalityStoreJob.cpp:60] Sending instance ff86262d-5378f2e7-c31a6e4b-f4d02551-c913c943 to modality "OrganSeg"
    I0607 09:43:46.394865 FilesystemStorage.cpp:155] Reading attachment "0bf38b96-63c0-4edf-bea4-1d309eebd65a" of "DICOM" content type
    I0607 09:43:46.398896 JobsRegistry.cpp:486] Job has completed with success: 2ed3820c-1478-44de-b6c3-b9d300f92147
    

    I selected all the DICOM images (the whole series) from the abdominal CT data. Do the above logs indicate that the processing/segmentation was successful, or only that reading the images was successful?

    Looking forward to your response

    Hi
    Congratulations, it looks like your workflow completed successfully! Just to confirm: you should get a new series with the same original name plus "processed by clara" (a quick command-line check is sketched below). As for web viewers, you can use either OHIF or Oviyam to connect to either Orthanc or dcm4chee.
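
    If you want to confirm the new series from the command line, a small sketch against Orthanc's REST API (assuming port 8042; the exact tag layout can differ between Orthanc versions):

    # Hedged check: list the series and look for one whose SeriesDescription mentions clara.
    curl http://localhost:8042/series
    # Then inspect an individual series id returned above:
    # curl http://localhost:8042/series/<id>     (look at MainDicomTags -> SeriesDescription)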

    For OHIF, they recently updated the GitHub repository and moved the docker files here: https://github.com/OHIF/Viewers/tree/master/docker

    I used an older, simpler set of docker files, as follows:
    Create a file named docker-compose.yml with the content below and change the line pacsIP:yourorthancIPhere:

    version: '3.6'
    services:
      mongo:
        image: "mongo:latest"
        container_name: ohif-mongo
        ports:
          - "27017:27017"
    
      viewer:
        image: ohif/viewer:latest
        container_name: ohif-viewer
        ports:
          - "3030:3000"
        links:
          - mongo
        environment:
          - MONGO_URL=mongodb://mongo:27017/ohif
        extra_hosts:
          - "pacsIP:yourorthancIPhere"
        volumes:
          - ./dockersupport-app.json:/app/app.json
    

    and another file named dockersupport-app.json

    {
      "apps" : [{
        "name"        : "ohif-viewer",
        "script"      : "main.js",
        "watch"       : true,
        "merge_logs"  : true,
        "cwd"         : "/app/bundle/",
        "env": {
        	"METEOR_SETTINGS": {
    		  "servers": {
    		    "dicomWeb": [
    					{
    		        "name": "Orthanc",
    		        "wadoUriRoot": "http://pacsIP:8042/wado",
    		        "qidoRoot": "http://pacsIP:8042/dicom-web",
    		        "wadoRoot": "http://pacsIP:8042/dicom-web",
    		        "qidoSupportsIncludeField": false,
    		        "imageRendering": "wadouri",
    		        "thumbnailRendering": "wadouri",
    		        "requestOptions": {
    		          "auth": "orthanc:orthanc",
    		          "logRequests": true,
    		          "logResponses": false,
    		          "logTiming": true
    		        }
    		      }
    		    ]
    		  },
    		  "defaultServiceType": "dicomWeb",
    		  "public": {
    				"ui": {
    					"studyListDateFilterNumDays": 1
    				}
    			},
    		  "proxy": {
    		    "enabled": true
    		  }
    		}
        }
      }]
    }
    

    For your second question regarding Oviyam: yes, you can have Oviyam and OHIF both running and look at the images from Orthanc. If you are interested, I can provide you with docker files for Oviyam to get it up and running.
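
    Going back to the docker-compose above: assuming both files sit in the same directory, bringing the viewer up is just a docker-compose call, with OHIF exposed on port 3030 per the port mapping in that file:

    # Hedged usage sketch for the compose file above.
    docker-compose up -d
    # Then browse to http://<docker-host-ip>:3030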

    Hello aharouni,

    I tried your code (the older, simpler docker-compose file and dockersupport-app.json), but with this we aren't able to access Orthanc. The OHIF viewer image exposes only ports 80 and 443. So I updated the docker-compose file to look as shown below.

    Your docker-compose.yml with only the port mapping updated:

    version: '3.6'
    services:
      mongo:
        image: "mongo:latest"
        container_name: ohif-mongo
        ports:
          - "27017:27017"
      viewer:
        image: ohif/viewer:latest
        container_name: ohif-viewer
        ports:
          - "3030:80"  # see the update here
        network_mode: "host"
        volumes:
          - ./dockersupport-app.json:/app/app.json
    

    But even after this, we couldn't launch the Orthanc app itself to upload images and send them to Clara.

    So later I tried again with the updated link you provided (https://github.com/OHIF/Viewers/tree/master/docker) and modified my docker-compose.yml to look as shown below.

    Modified docker-compose.yml:

    I tried 2 variants of the code below. The 1st variant is as shown; the 2nd variant is the same except for the commented sections (the "image", "ports" and "volumes" lines).

    version: '3.6'
    
    services:
      proxy:
        image: nginx:1.15-alpine    #2nd variant is I replaced image name with "ohif/viewer:latest"
        ports:
          - "8899:80"               #2nd variant is I replaced the port number with "3030:80"
        network_mode: "host"
        volumes:                    #2nd variant - No volumes section
          - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
        restart: unless-stopped
    
      orthanc:
        image: jodogne/orthanc-plugins:1.5.6
        hostname: orthanc
        ports:
          - "4242:4242" # DICOM
          - "8042:8042" # Web
        network_mode: "host"
        volumes:
          - ./config/orthanc.json:/etc/orthanc/orthanc.json:ro
          - ./config/orthanc-db/:/var/lib/orthanc/db/
        restart: unless-stopped
    

    With this, I am able to view the Orthanc explorer, upload the images and send them to Clara ("Send to DICOM modality"). But when I access port 8899/3030 in the browser, we don't see anything; it's an error and nothing is available there.

    Can you please let us know whether I am using the right image and port address for the OHIF viewer? Can the OHIF viewer be accessed from a different port in a browser, or is it within the Orthanc web explorer?

    http://localhost:8042 - Orthanc Explorer - Able to upload images and send to Dicom modality

    http://localhost:8899 (or :3030) - OHIF Viewer - error; no webpage served there.

    Can you please guide/walk us through how to add a viewer to our Orthanc explorer? We are already close but keep encountering one issue or another. Can you let us know whether it is working at your end with the above config?

    1. Another unusual issue: when I start Orthanc I see the message "Using GDCM instead of the DICOM decoder that is built in Orthanc", and the Clara output is failing because of it, which is something new.

    We require your help in connecting OHIF Viewer to Orthanc.

    Looking forward to hearing from you

    Can you provide us with the Oviyam viewer instructions? We can try that as well.

    Can anyone help us with this issue and let us know whether you are experiencing the same issues as well?

    Hi

    In order to track down the issue, you should separate the PACS from the viewer. If you have separate Docker containers for each, it simplifies the debugging and you can switch between different implementations. I recommend you:

    1. Have separate Docker containers for the PACS, either Orthanc or dcm4chee or both
    2. Test the workflow by sending DICOMs to Clara and receiving the results back
    3. Have separate Docker containers for the viewers, like OHIF or Oviyam or both
    4. Work on connecting the viewer with the running PACS

    I already have both PACS systems running (Orthanc and dcm4chee) and both viewers (OHIF and Oviyam), and I can trigger Clara from either PACS and see the results in either viewer.

    Could you break this down and let me know where you are stuck?

    1. Orthanc: please follow the steps above or here: https://docs.nvidia.com/clara/deploy/public/ClaraSDK_UserGuide.html#running-a-demonstration
    2. dcm4chee: follow the docker-compose instructions at the bottom of this page: https://github.com/dcm4che/dcm4chee-arc-light/wiki/Run-minimum-set-of-archive-services-on-a-single-host
    3. Set up OHIF using docker-compose without the orthanc service; you need to connect to the existing Orthanc or dcm4chee
    4. Set up Oviyam following the steps below

    oviyam setup

    Create a new file named Dockerfile:

    FROM tomcat:7.0.91-jre7
    
    RUN apt-get install curl unzip
    
    WORKDIR  /
    
    RUN mkdir ovitmp && \
        cd ovitmp && \
        curl https://iweb.dl.sourceforge.net/project/dcm4che/Oviyam/2.7.1/Oviyam-2.7.1-bin.zip > oviyam.zip && \
            unzip oviyam.zip
            
    RUN rm -R /usr/local/tomcat/webapps/ROOT/
    RUN cp /ovitmp/Oviyam-2.7.1-bin/Oviyam-2.7.1-bin/oviyam2.war /usr/local/tomcat/webapps/ROOT.war
    RUN cp /ovitmp/Oviyam-2.7.1-bin/tomcat/*.jar  /usr/local/tomcat/lib
    COPY tomcat-users.xml /usr/local/tomcat/conf/tomcat-users.xml
    

    Create a file named tomcat-users.xml and change the username/password if you like:

    <?xml version='1.0' encoding='utf-8'?>
    <!--
      Licensed to the Apache Software Foundation (ASF) under one or more
      contributor license agreements.  See the NOTICE file distributed with
      this work for additional information regarding copyright ownership.
      The ASF licenses this file to You under the Apache License, Version 2.0
      (the "License"); you may not use this file except in compliance with
      the License.  You may obtain a copy of the License at
    
          http://www.apache.org/licenses/LICENSE-2.0
    
      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License.
    -->
    <tomcat-users>
      <role rolename="tomcat"/>
      <role rolename="admin"/>
      <role rolename="manager-gui"/>
      <user username="tomcat" password="cattom" roles="manager-gui, manager-script, manager-status, manager-jmx"/>
      <user username="admin" password="admin" roles="admin"/>
    </tomcat-users>
    

    Build your Docker image:

    docker build -t oviyam:2.7.1 .
    

    Start your Docker container; change the data location and ports if you have conflicts:

    docker run -it --rm --name oviyam \
      -p 8081:8080 -p 1025:1025 \
      -v <localpath>/data/:/usr/local/tomcat/work \
      oviyam:2.7.1
    

    Hello Aharouni,

    Thanks for answering my questions. Here are the issues that I am facing; I have divided my response into 4 simple categories so it is easy to follow.

    1) Trying to run just Clara Deploy - no viewer, just uploading images, and unable to send to a DICOM modality

    a) I am not able to see the DICOM modalities in Orthanc, i.e. clara-liverseg and clara-ctseg. Up to now, when I started the container using the command below, I would see those two options after opening Orthanc and uploading images, but they are missing now (refer to the attached screenshot). I did update the details in the dicom-server-config file and restarted/redeployed Clara several times as well. The command below is what I used to start the Orthanc server:

    docker run --net=host -p 4242:4242 -p 8042:8042 --rm --name orthanc -v $(pwd)/orthanc/config/orthanc.json:/etc/orthanc/orthanc.json -v $(pwd)/orthanc/config/orthanc-db:/var/lib/orthanc/orthanc-db jodogne/orthanc-plugins /etc/orthanc --verbose
    

    b) Another issue is that as soon as I upload the image files and look up the study, I see the header of the study as "Processed by clara". Not sure what's happening; I didn't have those two modality options to click, so how come it displays as processed by Clara? I deleted the study and re-uploaded it, but it still shows as "Processed by Clara" (refer to the attached screenshot).

    c) When I issue the command to view the logs of the DICOM server, I only see the same logs again and again. It is only a fixed set of lines and the command exits automatically; it doesn't capture whatever I do in Orthanc. Not sure what's happening. Last week I could see logs based on my activity (refer to the attached screenshot).

    Can you help us understand what’s the issue here?

    2) New method to start Orthanc and OHIF Viewer - How to connect with Clara

    From the OHIF community I found an easy method to start Orthanc and the OHIF viewer via yarn. I did the following; you can follow it too and let me know whether it works:

    a) Download the OHIF viewer git repository (https://github.com/OHIF/Viewers)

    b) Unzip it

    c) run yarn install

    d) yarn run orthanc:up # starts orthanc in localhost:8899

    e) run: yarn run dev:orthanc #starts viewer in localhost:5000

    It was simple for a beginner like me. But how do I integrate clara to this?

    I updated the port number in the dicom-server config file from 4242 to 8899, but couldn't get any modality listed (clara-ctseg, clara-liverseg). Not sure whether this is due to issue no. 1.

    Can you please let us know how this can be done? I am also trying to find ways to get this connected.

    3) OVIYAM-setup

    I successfully created the files that you provided and the docker build command was also successful. But what I would like to confirm with you is:

    a) Currently I use the DICOM images from the below folder

    ~/claradeploy/clara/clara-reference-workflow/test-data/00000000-0000-0000-0000-000000000000/dicom-reader/input/patient-id/study-uid/series-uid

    This is the only place in the Clara Deploy repository where I was able to find DICOM images. Under sampleData I don't see any DICOM series. Is there any other DICOM series in the repository?

    Also, in your command for running Oviyam you have provided a data path; can I know which path you are referring to here?

    docker run -it --rm --name oviyam \
      -p 8081:8080 -p 1025:1025 \
      -v <localpath>/data/:/usr/local/tomcat/work \    # which path are you referring to here?
      oviyam:2.7.1
    

    4) General info about our infra

    I work on a Windows desktop, but we connect to a remote Linux server to run Docker, where the NVIDIA/CUDA drivers are all installed. I don't see the port forwarding causing any issue as of now; whichever app and port I expose on the Docker host, I am able to port forward successfully and view it locally in my desktop browser. Note that the remote server doesn't have a UI.

    As this is how I work, I have all the above-mentioned DICOM images downloaded to my desktop. So I launch Orthanc in my local browser by typing localhost:8042 and upload the DICOM images from my (local) desktop. Similarly, the OHIF/Oviyam viewer, whichever it is, can only be accessed locally.

    Can you guide us on how to resolve the issues? Screenshots would help you get an idea

    I have attached the Osimis viewer output, which was empty when I viewed the image even though the study was present in the Orthanc explorer. A Clara redeploy screenshot is also attached for your reference.

    Hello aharouni,

    Just an update on the 2nd item: you can find the port info (8899 and 5000) in the public config .js files of the Viewers git repository, if that helps.