Visualize the segmentation model with Grad-Cam

I’d like to visualize a segmentation model built with AH-Net.
I tried to visualize it by referring to the example included in [clara_train_COVID-19_3d_ct_classification], but it didn’t work because of the following error.

tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: Incompatible shapes: [1,35,35,16,1024] vs. [1,36,36,16,1024]
[[{{node stage6/add}}]]
(1) Invalid argument: Incompatible shapes: [1,35,35,16,1024] vs. [1,36,36,16,1024]
[[{{node stage6/add}}]]

The documentation shows that the tf_saliency_infer module is based on simpleinfer; however, my model was built with dynamic shapes enabled, so I guess that is why it didn’t work.
Do you have a tf_saliency_infer module based on scanwindowinfer? Or if you have any other useful information, could you share it?

I used Clara version 3.1.01.

Thank you and best regards.

Thanks for your interest in Clara Train.
Unfortunately, we don’t have an easy way to add Grad-CAM or visualize it. We will add this to the feature requests to work on in V4.1, since V4 is about to be released in a couple of weeks.

Grad-CAM was developed as a method for creating “visual explanations” for deep learning-based classification models. It is not really suitable for visualizing a segmentation model such as AHNet. You might instead visualize the probability maps produced by AHNet directly.

You can remove the “ArgmaxAcrossChannels” post_transform to save the probability maps instead of segmentation masks. When doing so, please also change the “WriteNifti” writer’s “dtype” to “float32”.
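A minimal numpy sketch of why the dtype change matters (illustrative only; the actual transforms are configured in Clara’s config file, and the shapes here are made up):

```python
import numpy as np

# Toy 3-channel probability output (channels-first), standing in for AHNet.
rng = np.random.default_rng(0)
probs = rng.random((3, 4, 4)).astype(np.float32)
probs /= probs.sum(axis=0, keepdims=True)  # normalize so channels sum to 1

# With "ArgmaxAcrossChannels": an integer label map is what gets written.
labels = np.argmax(probs, axis=0).astype(np.uint8)

# Without it: the raw per-class probabilities are written instead, which is
# why the writer's dtype must be float32 -- an integer dtype would truncate
# every value in [0, 1] to 0 or 1.
print(labels.dtype, probs.dtype)  # uint8 float32
```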

Thank you for your answer.
I am looking forward to V4.1.
I would be happy if I could see the reason for the unintended segmentation.

Fig. 1: The inference results appear where there is no bladder (using the bladder model).

Thank you for telling me about Grad-CAM.

I was able to get a probability map with your method.

When I checked the probability maps, I found that the model image and the class image are different.
Could you tell me about this difference?

Best regards,

Fig. 1: class image (using the prostate model)
Fig. 2: model image (using the prostate model)

Hi. When you remove “ArgmaxAcrossChannels”, the “SplitBasedOnLabel” post_transform doesn’t make sense anymore. It expects the class indices produced by the argmax function, and it saves a result for each class like:

result = (img_data == idx_number).astype(np.int8)

Now, without argmax, you just see the probability being thresholded at values >0.5 in the class image.
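To make the per-class extraction above concrete, here is a small runnable example on a hand-made label map (the array values are invented for illustration; the extraction expression is the one quoted above):

```python
import numpy as np

# A label map as produced by argmax over class channels (0 = background,
# 1 and 2 = foreground classes).
img_data = np.array([[0, 1, 2],
                     [1, 1, 0],
                     [2, 2, 1]])

# What the per-class split effectively computes for each class index:
masks = {idx: (img_data == idx).astype(np.int8) for idx in range(3)}

# Binary mask for class 1:
print(masks[1])
```

On raw probability maps the same equality test no longer picks out classes, which is why the class image degenerates into a thresholded view.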

By the way, the prostate model produces three output channels. Your probability maps should be a volume with three channels (background, central gland, and peripheral zone).
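A quick sketch of that shape relationship, assuming a channels-first layout and an invented 8×8×4 volume size:

```python
import numpy as np

# Hypothetical prostate-model output: 3 channels
# (background, central gland, peripheral zone) over an H x W x D volume.
rng = np.random.default_rng(1)
probs = rng.random((3, 8, 8, 4)).astype(np.float32)
probs /= probs.sum(axis=0, keepdims=True)

# Argmax collapses the channel axis into a single label volume.
class_map = probs.argmax(axis=0)
print(probs.shape, class_map.shape)  # (3, 8, 8, 4) (8, 8, 4)
```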

Also, you could pass “fields_new”: [“model_classes”] (or similar) as an argument to the Argmax transform and then feed that new field to the SplitBasedOnLabel transform. That way you can save both the probability maps and the class segmentation maps. If you want to split the probability maps per class, use “SplitAcrossChannels”.
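In plain numpy, the combined pipeline amounts to something like the following (a sketch only; the field name “model_classes” comes from the suggestion above, and the shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
probs = rng.random((3, 8, 8, 4)).astype(np.float32)
probs /= probs.sum(axis=0, keepdims=True)

# Argmax with a new output field: the original probability field is kept,
# and the label map lands in a separate field ("model_classes" here).
model_classes = probs.argmax(axis=0)

# What "SplitAcrossChannels" boils down to: one float volume per channel.
per_class = [probs[c] for c in range(probs.shape[0])]

print(len(per_class), per_class[0].shape, model_classes.shape)
```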

Thank you very much. I was able to save both the probability maps and the class segmentation maps. First, I would like to explore directions for improving the model’s accuracy based on these outputs.