End-to-end flow for Clara Train AIAA and TLT

Hello,

I followed your blog for Clara Train (https://devblogs.nvidia.com/annotation-transfer-learning-clara-train/), managed to do the tasks mentioned in the tutorial, and was able to generate an output, shown in the screenshot below. The blog ends after client integration.

However, due to my limited knowledge, I am not able to see the full picture of this Clara-MITK ecosystem and how it works.

Is there any beginner-level guide to help us through the end-to-end process?

Though I followed the blog, it still feels like a black box to me. Could you please direct me to a resource where I can understand the workflow?

Is it something like this:

  1. Downloading the models from the NVIDIA registry
  2. Starting the annotation server
  3. Client integration - MITK? I have installed the client code and libraries (C++) from GitHub for the spleen model, using the example.cpp file.

What happens after this? Or what should we be doing here?

I have downloaded the MITK Workbench desktop application, which is installed on my local desktop. How can I make use of the features of MITK and make changes to the models/images? How is this connection established? I am not able to see any “connection pane” in MITK.

My understanding might be totally wrong here. It would be helpful if you could share or direct us to a resource that gives the full picture of this end-to-end workflow with an example. Is the blog missing certain info?


After setting up the AI-assisted annotation server, you can download MITK with the NVIDIA plugin: http://www.mitk.org/download/releases/MITK-2018.04.2/Nvidia/

In the preferences (Ctrl+P), set the address of the “AI Assisted Annotation Server” to the host and port where AIAA is running. Then you can perform single/multi-label segmentation using MITK's NVIDIA plugin (which makes calls to AIAA).
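
Before pointing MITK at the server, it can help to confirm AIAA is actually reachable. Here is a minimal sketch using the C++ client from the NvidiaAIAAClient repository; the class and method names are patterned on its example.cpp and docs, and the address, port, and toJson() helper are assumptions that may differ by client version:

```cpp
// Sanity check: can we reach the AIAA server and list the models it serves?
// Assumption: the NvidiaAIAAClient C++ API as shown in its example.cpp;
// exact names/signatures may differ between client versions.
#include <nvidia/aiaa/client.h>
#include <iostream>

int main() {
  try {
    // Placeholder address: use the same host:port you enter in MITK's preferences.
    nvidia::aiaa::Client client("http://127.0.0.1:5000");

    // Ask AIAA which models it is serving (e.g. annotation_spleen).
    nvidia::aiaa::ModelList modelList = client.models();
    std::cout << "Models on AIAA: " << modelList.toJson() << std::endl;
  } catch (...) {
    // The client throws nvidia::aiaa::exception on failure; catching broadly
    // here keeps the sketch version-agnostic.
    std::cerr << "Could not reach the AIAA server" << std::endl;
    return 1;
  }
  return 0;
}
```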

Yes, I downloaded all of this; my bad that I missed looking into the “Preferences” section. Thank you for the help.

Hello Alvin,

Would you mind explaining in detail what happens after the AIAA server is started?

Here are my questions. Let’s consider only the spleen model for demo purposes.

  1. I have downloaded the models from the NVIDIA registry
  2. Converted them to optimized models
  3. Started the annotation server with engine_pool.json for the annotation_spleen model
  4. My annotation server is working fine
  5. Client Integration
    a) I have installed the prebuilt libraries from GitHub (NvidiaAIAAClient)
    b) What is the use of the example.cpp file? Am I right to understand that this file is used to establish a connection to the AIAA server? If yes, is it required to define the additional parameters manually in the code, as mentioned in the blog (https://devblogs.nvidia.com/annotation-transfer-learning-clara-train/)?
  6. Are we doing the above setup (step 5) to enable MITK to connect to the AIAA server? Is my understanding right?
  7. Next, once we connect to AIAA from MITK (Preferences section), I believe I will be able to see the folder structure of my AIAA server in the MITK Data Manager. Please note that we currently aren't able to establish this; I am just checking with you on the procedure to be clear.
  8. I have the data in my data directory (let's say I have 100 unlabeled spleen CT scans and would like them annotated based on the pre-trained models in the AIAA server):
    a) Now, my 100 spleen CT images are in the data folder.
    b) As an end user, I select an image and mark certain points. What happens after this? I understand that I might be missing a lot of steps here; it would really be helpful if you could explain the rest of the procedure. How do I know whether it belongs to class A, class B, or class C (like positive (1) or negative (0))?
  9. Can you please let us know what the flow is like after the above step?

I am trying to do the installation and also understand the end-to-end process flow. I have been searching for a webinar or videos online that give the full picture, but I am unable to find any easy-to-understand or extensive documentation on the MITK-AIAA workflow.

5.b) The example.cpp file shows you how to create your own client for AIAA, should you choose to build one. MITK already comes with the AIAA client, so you do not necessarily need to build your own plugin (though you could).
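
To make that concrete, here is roughly what example.cpp demonstrates. This is a sketch, not the authoritative file: it assumes the NvidiaAIAAClient C++ API, the server address is a placeholder, and names/signatures may differ by client version:

```cpp
// Sketch of what example.cpp demonstrates: establish a connection to AIAA
// and select a model for a label. Patterned on the NvidiaAIAAClient C++ API;
// exact names/signatures may differ by client version.
#include <nvidia/aiaa/client.h>
#include <iostream>

int main() {
  // The "additional parameters" from the blog boil down to the server URI
  // (placeholder below) and the label/model you want to use.
  nvidia::aiaa::Client client("http://127.0.0.1:5000");

  // Fetch the models served by AIAA and pick the one whose label matches
  // "spleen" (i.e. the annotation_spleen entry from engine_pool.json).
  nvidia::aiaa::ModelList modelList = client.models();
  nvidia::aiaa::Model model = modelList.getMatchingModel("spleen");

  std::cout << "Selected model: " << model.name << std::endl;
  return 0;
}
```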

  6. You are setting up AIAA to connect to any compatible custom-built client, or to MITK, for which we have provided the client.

  7. No, you will not be able to see a folder structure. You will need to navigate to the multi/single-label segmentation and click Nvidia Segmentation in the 3D tools. From there, you have to name the label of the structure you are segmenting the same as the label in your engine_pool.json. Then select 6 points (near the contour of the structure of interest) and click “Confirm points”; see the sketch below for what that click triggers.
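
Under the hood, “Confirm points” sends your 6 clicks plus the image to AIAA's DEXTR3D annotation endpoint. A rough sketch of that call using the NvidiaAIAAClient C++ API follows; the point coordinates are made up for illustration, the file names and address are placeholders, and the signatures may vary by client version:

```cpp
// Sketch of what "Confirm points" triggers: the 6 user clicks plus the image
// are sent to AIAA's DEXTR3D annotation endpoint. Assumes the NvidiaAIAAClient
// C++ API; signatures may differ by client version.
#include <nvidia/aiaa/client.h>

int main() {
  nvidia::aiaa::Client client("http://127.0.0.1:5000");  // placeholder address

  // The label you type in MITK must match a label in engine_pool.json;
  // that is how AIAA picks the right model (e.g. annotation_spleen).
  nvidia::aiaa::Model model = client.models().getMatchingModel("spleen");

  // The 6 points clicked near the contour, as (x, y, z) indices.
  // These coordinates are made up for illustration.
  nvidia::aiaa::PointSet pointSet = nvidia::aiaa::PointSet::fromJson(
      "[[70,172,86],[105,161,180],[125,147,164],[56,174,124],[91,119,143],[77,219,120]]");

  // AIAA crops a region around the points, runs the annotation model, and
  // writes the segmentation mask; MITK then loads that mask back as the label.
  client.dextr3d(model, pointSet, "image.nii.gz", "result.nii.gz");
  return 0;
}
```

The resulting mask (result.nii.gz here) is what you then review and correct in MITK before saving it, as described in 8.b below.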

8.b) Once you annotate an image (successfully), you save the annotation as NIfTI/MHD/DICOM (or any of the formats MITK or your custom client supports). That annotation data can then be used to train, say, a segmentation CNN that requires no human input; at that stage you would deploy the trained network to perform inference for some specific task.

So basically, instead of a radiologist spending a lot of time manually inspecting the files, MITK offers the radiologist an easy way to view different orientations of the image and mark the organs for annotation. If a user wants their own training data, they might have to repeat this procedure for a significant portion (say 70%) of their dataset. Am I right? Otherwise, they can decide to use NVIDIA's pretrained spleen model to predict the labels for their patients. I kindly request that you correct me if my understanding is incorrect.


Your understanding is mostly correct.

MITK is used only for annotation of new data at this time. It is designed to interact with AIAA, using the knowledge of a trained annotation model to segment a structure of interest that the model knows how to segment, given some user hints (the 6 clicks).

A person interested in creating new data uses a trained annotation model, which is perhaps not perfect, to annotate data as quickly as possible (i.e., using 6 clicks plus corrections). Of course (and unfortunately), annotation models have to be trained on manually annotated data, but the idea is that you can start with a small manually annotated dataset, train an annotation model on it using TLT, deploy that model into AIAA, and use it to annotate data faster. This is an iterative process that should allow you to reach the desired number of annotated samples faster.

One can also use the pretrained models we provide to annotate, being mindful of the fact that each has been trained on the Medical Segmentation Decathlon dataset (http://medicaldecathlon.com/) and therefore applies to that imaging modality.

As I mentioned above, with TLT you can train your own annotation model, either using the pre-trained models as a starting point or from scratch.

So, in summary, the process is:

  1. [Optional] Train annotation model
  2. Deploy annotation model in AIAA
  3. Annotate new data with MITK connected to AIAA
  4. Use new data to
    4.a) train/improve an existing annotation model so new annotations need fewer corrections, and/or
    4.b) train a segmentation model [that requires no user input] which can be deployed to perform segmentation in the background on new data