HeatMap GradCAM using TensorRT Engine

Could anyone help me generate a GradCAM image from a TensorRT engine?
For this, I need to extract the custom layer along with its associated weights.
Note: the implementation uses the C++ API.
Thanks in advance.

Environment

TensorRT Version: 7
GPU Type:
Nvidia Driver Version:
CUDA Version: 10.0.3
CUDNN Version: 7.6.5
Operating System + Version: Windows 10
Python Version (if applicable): Using C++ with Caffe
TensorFlow Version (if applicable): 2.0.0
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

Two approaches you can try in this case:
a. Create a custom plugin that computes the gradients needed for GradCAM and attach it to the last convolution layer in the model.
b. Mark the last convolution layer as an output layer while generating the TRT engine. After inference, use that additional output of the TRT engine to create the GradCAM visualization as part of post-processing (a rough sketch follows below).
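
A minimal sketch of the build-time side of approach (b), assuming the model is parsed with the Caffe parser and that the last convolution blob is named res2a; the file names, blob names, and workspace size below are placeholders:

#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

ICudaEngine* buildEngineWithConvOutput(IBuilder* builder)
{
    // Implicit-batch network, as required by the Caffe parser in TensorRT 7.
    INetworkDefinition* network = builder->createNetworkV2(0U);
    ICaffeParser* parser = createCaffeParser();

    // Parse the Caffe model (placeholder file names).
    const IBlobNameToTensor* blobs =
        parser->parse("deploy.prototxt", "model.caffemodel", *network, DataType::kFLOAT);

    // Keep the usual score output and additionally expose the last conv feature maps.
    ITensor* score = blobs->find("prob");   // placeholder name of the score/softmax blob
    ITensor* conv  = blobs->find("res2a");  // last convolution blob used for GradCAM
    if (score) network->markOutput(*score);
    if (conv)  network->markOutput(*conv);

    builder->setMaxBatchSize(1);
    IBuilderConfig* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28);
    return builder->buildEngineWithConfig(*network, *config);
}

With both blobs marked, the engine exposes two output bindings: one for the scores and one for the conv feature maps.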

Thanks


Thank you for the helpful suggestions.

I have implemented approach (b) as you suggested, but I still end up getting the score values rather than the output of the last convolution layer.
These are the steps I performed:

  1. While building and saving the engine, I marked a convolution layer (e.g. res2a) as an output layer.
  2. During inference, after execute() and translateOutput(), I am getting only the score as the output (a rough sketch of the inference side is shown below).
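
For reference, a rough sketch of how the inference side can retrieve both outputs with the TensorRT 7 C++ API; the binding names (data, prob, res2a) and the buffer handling are placeholders rather than my exact code:

#include "NvInfer.h"
#include <cuda_runtime_api.h>
#include <vector>

using namespace nvinfer1;

// Number of elements in a binding (implicit-batch engine, batch size 1).
static size_t volume(const Dims& d)
{
    size_t v = 1;
    for (int i = 0; i < d.nbDims; ++i) v *= d.d[i];
    return v;
}

void runInference(ICudaEngine* engine, IExecutionContext* context, const float* inputHost)
{
    // Each marked output has its own binding; look them up by name instead of
    // assuming the engine has only one output.
    const int inputIdx = engine->getBindingIndex("data");   // placeholder input blob name
    const int scoreIdx = engine->getBindingIndex("prob");   // placeholder score blob name
    const int convIdx  = engine->getBindingIndex("res2a");  // last conv blob

    const size_t inputSize = volume(engine->getBindingDimensions(inputIdx));
    const size_t scoreSize = volume(engine->getBindingDimensions(scoreIdx));
    const size_t convSize  = volume(engine->getBindingDimensions(convIdx));

    std::vector<void*> buffers(engine->getNbBindings(), nullptr);
    cudaMalloc(&buffers[inputIdx], inputSize * sizeof(float));
    cudaMalloc(&buffers[scoreIdx], scoreSize * sizeof(float));
    cudaMalloc(&buffers[convIdx],  convSize  * sizeof(float));

    cudaMemcpy(buffers[inputIdx], inputHost, inputSize * sizeof(float), cudaMemcpyHostToDevice);

    context->execute(1, buffers.data());  // batch size 1, implicit-batch engine

    // Copy back BOTH outputs: the scores and the conv feature maps for GradCAM.
    std::vector<float> scores(scoreSize);
    std::vector<float> convMaps(convSize);
    cudaMemcpy(scores.data(), buffers[scoreIdx], scoreSize * sizeof(float), cudaMemcpyDeviceToHost);
    cudaMemcpy(convMaps.data(), buffers[convIdx], convSize * sizeof(float), cudaMemcpyDeviceToHost);

    for (void* b : buffers) cudaFree(b);
}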

Please advise.

Thanks in advance.

Can you share the script and model to reproduce the issue, so that we can help you better?
Meanwhile, can you try the custom plugin approach as well?

Thanks

I’m trying the same thing. Would it be possible for you to share your progress?

Hi,
We request you to share the ONNX model and the script, if not already shared, so that we can assist you better.
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command (for example: trtexec --onnx=your_model.onnx --verbose).

In case you are still facing the issue, we request you to share the trtexec "--verbose" log for further debugging.
Thanks!

For your approach (b), could you specify how to proceed after inference? Adding the last conv layer as an output was easy, but the backward hooks used in the GradCAM implementation for the gradient calculations are the tricky bit. Could you explain how you would proceed with your post-processing, given that TensorRT won't calculate gradients during inference?
gradcam.py (5.8 KB) resnet.py (11.0 KB)
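
One gradient-free way to do the post-processing, sketched below as an assumption rather than a confirmed recipe: if the network ends in global average pooling followed by a single fully connected layer (as ResNet does), the Grad-CAM channel weights reduce to the FC weights of the chosen class (the CAM special case), so the heat map can be computed directly from the conv feature maps returned by the engine and the FC weights read from the original model. All names and layouts below are placeholders.

#include <vector>
#include <algorithm>

// convMaps : C x H x W feature maps of the last conv layer (host copy from the engine)
// fcWeights: numClasses x C weight matrix of the final FC layer, taken from the
//            original model (assumes global average pooling between conv and FC)
std::vector<float> computeHeatMap(const std::vector<float>& convMaps,
                                  const std::vector<float>& fcWeights,
                                  int channels, int height, int width, int classIdx)
{
    std::vector<float> cam(height * width, 0.0f);

    // Weighted sum of the feature maps; the weight for channel c is the FC
    // weight connecting that channel to the chosen class.
    for (int c = 0; c < channels; ++c)
    {
        const float w = fcWeights[classIdx * channels + c];
        const float* map = convMaps.data() + static_cast<size_t>(c) * height * width;
        for (int i = 0; i < height * width; ++i)
            cam[i] += w * map[i];
    }

    // ReLU, then normalize to [0, 1]; upsample to the input resolution and
    // overlay on the image as the final visualization step.
    float maxVal = 0.0f;
    for (float& v : cam) { v = std::max(v, 0.0f); maxVal = std::max(maxVal, v); }
    if (maxVal > 0.0f)
        for (float& v : cam) v /= maxVal;

    return cam;
}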