Fast AI Assisted Annotation and Transfer Learning Powered by the Clara Train SDK

Originally published at: https://developer.nvidia.com/blog/annotation-transfer-learning-clara-train/

The growing volume of clinical data in medical imaging slows down identification and analysis of specific features in an image. This reduces the annotation speed at which radiologists and imaging technicians capture, screen, and diagnose patient data. The demand for artificial intelligence in medical image analysis has drastically grown in the last few years. AI-assisted…

Hello! I am trying to reproduce the above blog on my end. I was able to run example.cpp successfully after making some changes to the code shared at this link (https://docs.nvidia.com/cla...) and got the output shown in the attachment.

Can you please let me know how MITK interacts with AIAA? Is there a beginner-level guide to help us understand how this interaction works, or an end-to-end tutorial covering the whole process, from loading images in MITK to running the annotation server?

Currently, I don't have a clear idea of what the flow is.

You can watch this video to see how to add a new segmentation in MITK:
https://drive.google.com/fi...

More details on the MITK Workbench can be found at:
http://docs.mitk.org/2018.0...

You can find the NVIDIA plugin at:
http://mitk.org/wiki/MITK_R...

Hello @sachidanandalle:disqus - Thanks for the response. I have installed the MITK Workbench on my desktop, and we have the annotation server running on a remote server.
As an end user, I would like to seek your input on a few things. Can you please help?

1) Once I establish a connection with AIAA in MITK, I should be able to see the data folder under the "Data Manager" tab. Let's say I have my data segregated into training and validation datasets. What do I do from there?

Do I open an image file and mark certain points / annotate labels for certain images? Which part of this is automated?

Once I do this annotation as shown in the video above, what happens? Can you please help us understand how this works after loading images in MITK?

The Data Manager tab has nothing to do with the connection between AIAA and MITK; the Data Manager concept is covered in the MITK user guide.
For annotation, using a tool like MITK, you pick some points as input labels. You start with some initial labels and, based on the result, keep iterating over the different slices to complete the annotation.

It works like this: if you pick good slices/points, you are certain to see quick results and less manual work in defining the full segmentation of the organ. It is not full automation here; we can call it a semi-supervised process.

Go through the video carefully; it shows how to load the image, how to create a label for segmentation, and then how to Shift+click points and call the annotation action.
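If it helps to see the flow outside MITK: when you confirm the points, the client posts them to the AIAA server's dextr3d endpoint together with the image and receives a segmentation mask back. Below is a minimal Python sketch of that call, assuming a server at http://remote-server:5000 and a hosted annotation_ct_spleen model; the multipart field names are my assumptions, so double-check them against the AIAA API reference.

# Minimal sketch of what happens when points are confirmed: the extreme
# points plus the image volume are POSTed to the AIAA dextr3d endpoint.
# Server address, model name, and multipart field names are assumptions.
import json
import requests

AIAA_SERVER = "http://remote-server:5000"  # hypothetical AIAA server address

# 1) List the annotation models hosted by the server
models = requests.get(AIAA_SERVER + "/v1/models").json()
print([m["name"] for m in models])

# 2) Six user-clicked extreme points around the organ (example voxel indices)
points = [[57, 120, 30], [91, 120, 30], [74, 98, 30],
          [74, 143, 30], [74, 120, 22], [74, 120, 39]]

with open("spleen_ct.nii.gz", "rb") as f:  # hypothetical input volume
    resp = requests.post(
        AIAA_SERVER + "/v1/dextr3d",
        params={"model": "annotation_ct_spleen"},  # pick a name from step 1
        files={
            "params": (None, json.dumps({"points": points})),
            "image": ("spleen_ct.nii.gz", f),
        },
    )

# The response carries the segmentation mask (the exact format can vary
# by server version, so inspect resp.headers before assuming raw bytes)
with open("spleen_mask.nii.gz", "wb") as out:
    out.write(resp.content)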

Hello @sachidanandalle:disqus - Thanks for your response. This certainly helps. I believe a call to the AIAA server is made each time we "confirm points". So the radiologist inspects the patient image and marks certain points as input labels for annotation. In this case, we saw these input points used to identify the "spleen", which is the label for this image. The radiologist then repeats this process for a significant portion of their dataset, which can be used as a training dataset, and the rest of the unseen images can then be used for prediction? Is my understanding of the flow right? The radiologist may decide to repeat this procedure to create their own training dataset, or they can use NVIDIA's spleen model as well. Could you kindly correct my understanding if it's not right?

In addition, I have a query regarding an NVIDIA AI-Assisted Annotation client code error.

Is this the right forum to ask?

https://drive.google.com/op...

Possibly a good video to showcase how to select extreme points and get better annotation results.

The NVIDIA AI-Assisted Annotation client error codes are a little generic; let me know which issue you are facing.

Hi Sachidanand,

Can you please help me understand what "Nvidia SmartPoly" does? I am not sure whether I am using it the right way. As shown in the attached screenshot, I am able to see some colored slices in the 3D view but am unable to interpret them. Can you share any resource/example on how to interpret this, or kindly guide me on using it the right way?

https://uploads.disquscdn.c...

It's like any other polygon editing: you switch to the 2D view and click on the Nvidia SmartPoly action.
This will show the polygon points for the segmented region. If something is wrong (say, for some slice), you can make a manual correction by dragging the point to the right position.
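Behind the scenes this maps to two AIAA server endpoints: mask2polygon, which turns the 3D mask into per-slice polygon vertices, and fixpolygon, which updates the mask after you drag a vertex. Here is a rough Python sketch of that round trip; the parameter and multipart field names are my assumptions, so check the AIAA server API reference before relying on them.

# Rough sketch of the SmartPoly round trip against the AIAA server.
# Endpoint names follow the AIAA API; parameter and multipart field
# names below are assumptions to be verified against the server docs.
import json
import requests

AIAA_SERVER = "http://remote-server:5000"  # hypothetical server address

# 1) Convert the segmentation mask into per-slice polygon vertices
with open("spleen_mask.nii.gz", "rb") as f:
    resp = requests.post(
        AIAA_SERVER + "/v1/mask2polygon",
        files={
            "params": (None, json.dumps({"point_ratio": 10})),  # assumed knob
            "image": ("spleen_mask.nii.gz", f),
        },
    )
polygons = resp.json()  # assumed: one vertex list per 2D slice

# 2) After dragging a vertex in the 2D view, send the edited polygon back
#    so the server can re-fit the mask for that slice
with open("spleen_mask.nii.gz", "rb") as f:
    resp = requests.post(
        AIAA_SERVER + "/v1/fixpolygon",
        files={
            "params": (None, json.dumps({"poly": polygons})),  # edited vertices
            "image": ("spleen_mask.nii.gz", f),
        },
    )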

I followed tutorial_cxr to train a classification model, but I'm failing when running the tlt-train command.
2019-05-22 02:08:14.053192: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 9944 MB memory) -> physical GPU (device: 1, name: GeForce RTX 2080 Ti, pci bus id: 0000:03:00.0, compute capability: 7.5)
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "common/scripts/train.py", line 25, in <module>
  File "common/scripts/train.py", line 20, in main
  File "classification/scripts/train_classification.py", line 210, in train_classification
TypeError: 'NoneType' object is not subscriptable

I faced a similar issue when I ran tutorial_brats.

train_loss: 0.9822 train_dice_et: 0.0073 train_dice_tc: 0.0178 train_dice_wt: 0.0063 train_dice: 0.0105 time: 32.52s
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "common/scripts/train.py", line 25, in <module>
  File "common/scripts/train.py", line 18, in main
  File "segmentation/scripts/train_segmentation.py", line 316, in train_segmentation
  File "common/trainers/fitter.py", line 500, in standard_fit
  File "common/metrics/val_classes.py", line 165, in get
TypeError: object of type 'NoneType' has no len()
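Both tracebacks fail on a value that is None, which makes me suspect a config entry did not resolve rather than the training code itself. Here is a quick sanity check of the MMAR config I plan to run before calling tlt-train again; the file names follow the standard MMAR layout, and which keys are required is my assumption based on the tutorial configs.

# Quick sanity check that the MMAR config files resolve before training.
# File names follow the standard MMAR layout; the keys checked here
# (DATA_ROOT, DATASET_JSON) are assumptions based on the tutorial configs.
import json
import os

MMAR = os.path.expanduser("~/tutorial_cxr")  # hypothetical MMAR root

for name in ("config/environment.json", "config/config_train.json"):
    path = os.path.join(MMAR, name)
    assert os.path.isfile(path), "missing config file: " + path
    with open(path) as f:
        cfg = json.load(f)
    print(name, "->", sorted(cfg.keys()))

env = json.load(open(os.path.join(MMAR, "config/environment.json")))
for key in ("DATA_ROOT", "DATASET_JSON"):  # assumed required keys
    value = env.get(key)
    assert value is not None, key + " is not set; training would see None"
    assert os.path.exists(value), key + " points at a missing path: " + value
print("config looks sane")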

Thanks in advance.