MITK - 4 channel input - Brain segmentation model

Hello,

I have installed a Clara SDK AIAA server to get auto-segmentation in MITK for brain tumor MRIs.

So far this works nicely when using the model segmentation_mri_brain_tumors_br16_t1c2tc with 1 channel input / 1 channel output:
https://ngc.nvidia.com/catalog/models/nvidia:med:segmentation_mri_brain_tumors_br16_t1c2tc

For my PhD work I would also like to use the model segmentation_mri_brain_tumors_br16_full, with 4-channel input and 3-channel output, to get segmentation results for tumor core, whole tumor, and enhancing tumor:
https://ngc.nvidia.com/catalog/models/nvidia:med:segmentation_mri_brain_tumors_br16_full

Later I would like to run a custom model with multi-modal input and multi-class output.

My question is: is multi-channel input possible with MITK right now?
Also, if I had a model with 1-channel input and multi-channel output, would MITK handle the multiple segmentation results?

Right now I get the following error on my AIAA server:

ValueError: Cannot feed value of shape (1, 1, 224, 224, 128) for Tensor 'NV_MODEL_INPUT:0', which has shape '(?, 4, 224, 224, 128)'

when trying to run segmentation_mri_brain_tumors_br16_full.
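For context, the error means the server received a single-channel volume where the model expects four channels stacked along the channel axis. A minimal NumPy sketch of the shape the model wants (the modality names are illustrative assumptions; the br16_full model's actual input ordering is defined by its model config):

```python
import numpy as np

# Hypothetical example: four co-registered MRI modalities,
# each resampled to the model's spatial size of 224 x 224 x 128.
t1    = np.zeros((224, 224, 128), dtype=np.float32)
t1c   = np.zeros((224, 224, 128), dtype=np.float32)
t2    = np.zeros((224, 224, 128), dtype=np.float32)
flair = np.zeros((224, 224, 128), dtype=np.float32)

# Stack along a new channel axis, then add a batch axis:
# resulting shape (1, 4, 224, 224, 128), matching NV_MODEL_INPUT:0.
volume = np.stack([t1, t1c, t2, flair], axis=0)[np.newaxis, ...]
print(volume.shape)  # (1, 4, 224, 224, 128)
```

MITK currently sends only one volume, which is why the server sees shape (1, 1, 224, 224, 128).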

Thanks and many regards!

Tobias

Hi schaffer,
It seems this doesn't work with MITK.
I encountered the same problem. Here is my question link: https://devtalk.nvidia.com/default/topic/1062891/clara-train-ai-assisted-annotation-sdk-new-/encounter-error-when-inference-by-segmentation_mri_prostate_cg_and_pz-model/

You can try the solution given by @yuantingh.

Hi Tobias,

Thanks for your interest.
It's good to hear that it works nicely for you.

@RayCui is right, the MITK plugin currently does not support 4D images.
You can take a look at my response in that thread.

And as you pointed out, AIAA in Clara Train 1.1 currently does not support multiple inputs/outputs.
That is on our roadmap, but it will have to wait for a future release.

Hi yuantingh and RayCui,

do you know if it is possible to run a Clara AIAA server on a K80-powered VM? I tried, but got an error message that this GPU is not supported, so I had to run on a P100 VM, which is quite expensive.

Thanks and many regards!

Tobias

You'd need a Maxwell-class GPU or later for the Clara AIAA server. The older K* parts aren't supported.

Thanks for the quick response! So a Tesla P4 should work, right?

Yep, that's my expectation. Some more insights here:

https://docs.nvidia.com/clara/deploy/ClaraInstallation.html

Ubuntu Linux 16.04 LTS or 18.04 LTS

NVIDIA Driver 410.48 or higher

Installation of CUDA Toolkit would make both CUDA and NVIDIA Display Drivers available

NVIDIA GPU is Pascal or newer


Kubernetes >= 1.11

Docker >= 17.03.2 (or 1.27)

NVIDIA Docker >= 2.0.0+docker17.03.2-1

Docker configured with nvidia as the default runtime (Prerequisite of NVIDIA device plugin for k8s)

Helm >= 2.11.0

At least 30GB of available disk space
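The prerequisites above can be checked mechanically. A small sketch that compares dotted version strings against these minimums; the "installed" values here are made-up examples, and in practice you would read them from `nvidia-smi`, `docker version`, `kubectl version`, and `helm version`:

```python
def meets_minimum(installed: str, minimum: str) -> bool:
    """Numeric dotted-version comparison, e.g. '418.40' >= '410.48'."""
    to_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return to_tuple(installed) >= to_tuple(minimum)

# Minimums from the list above; installed values are illustrative only.
requirements = {
    "NVIDIA driver": ("410.48",  "418.40"),
    "Kubernetes":    ("1.11",    "1.13"),
    "Docker":        ("17.03.2", "18.09.0"),
    "Helm":          ("2.11.0",  "2.14.1"),
}

for name, (minimum, installed) in requirements.items():
    status = "OK" if meets_minimum(installed, minimum) else "TOO OLD"
    print(f"{name}: {installed} (min {minimum}) -> {status}")
```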

OK, thanks a lot. I will try the proposed GCP configuration using a P4.