Category: 09. Vision models interpretability
-
Grad-CAM class activation visualization
Setup and configurable parameters: you can change these to use another model; to find the value for `last_conv_layer_name`, use `model.summary()` to see the names of all layers in the model. The example then walks through the Grad-CAM algorithm, test-drives it, creates a superimposed visualization, and tries another image to see how Grad-CAM explains the model's outputs for a multi-label image. Let's try…
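The core of the Grad-CAM step (pool the class-score gradients into per-channel weights, take a weighted sum of the feature maps, apply ReLU, normalize) can be sketched with plain NumPy on synthetic tensors. This is a minimal illustration, not the Keras example's actual code; `grad_cam_heatmap` and its inputs are hypothetical stand-ins:

```python
import numpy as np

def grad_cam_heatmap(conv_outputs, grads):
    """Combine conv feature maps into a Grad-CAM heatmap.

    conv_outputs: (H, W, C) activations of the last conv layer.
    grads: (H, W, C) gradients of the target class score w.r.t. those activations.
    """
    # Channel importance weights: global-average-pool the gradients.
    weights = grads.mean(axis=(0, 1))                 # (C,)
    # Weighted sum over channels, then ReLU to keep only positive evidence.
    heatmap = np.maximum(conv_outputs @ weights, 0)   # (H, W)
    # Normalize to [0, 1] for display.
    if heatmap.max() > 0:
        heatmap /= heatmap.max()
    return heatmap

# Toy run with synthetic activations and gradients (a real run would
# obtain both from the model with a gradient tape).
rng = np.random.default_rng(0)
acts = rng.random((7, 7, 16))
grads = rng.random((7, 7, 16))
hm = grad_cam_heatmap(acts, grads)
print(hm.shape)  # (7, 7) heatmap, ready to upsample over the input image
```

The resulting low-resolution heatmap is what gets resized and alpha-blended over the input to produce the superimposed visualization.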
-
Investigating Vision Transformer representations
Introduction In this example, we look into the representations learned by different Vision Transformer (ViT) models. Our main goal with this example is to provide insights into what empowers ViTs to learn from image data. In particular, the example discusses implementations of a few different ViT analysis tools. Note: when we say “Vision Transformer”, we refer…
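One common ViT analysis tool of this kind is attention rollout, which traces how attention propagates across layers by recursively multiplying per-layer attention maps. The NumPy sketch below follows the usual recipe (head-averaged attention mixed 50/50 with an identity matrix for the residual connection, then row-renormalized); it is an illustrative sketch under those assumptions, not the example's exact implementation:

```python
import numpy as np

def attention_rollout(attentions):
    """Recursively multiply attention maps across layers.

    attentions: list of (num_heads, T, T) row-stochastic attention matrices,
    one per transformer layer. Returns a (T, T) token-to-token attribution map.
    """
    rollout = None
    for attn in attentions:
        a = attn.mean(axis=0)                    # average over heads -> (T, T)
        a = 0.5 * a + 0.5 * np.eye(a.shape[0])   # account for the residual path
        a = a / a.sum(axis=-1, keepdims=True)    # re-normalize rows
        rollout = a if rollout is None else a @ rollout
    return rollout

# Toy example: 3 layers, 4 heads, 5 tokens of random (row-stochastic) attention.
rng = np.random.default_rng(0)
layers = [rng.dirichlet(np.ones(5), size=(4, 5)) for _ in range(3)]
rollout = attention_rollout(layers)
print(rollout.shape)  # (5, 5); each row still sums to 1
```

The row corresponding to the class token, reshaped to the patch grid, gives a saliency-style map of which image patches the model attends to overall.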
-
Model interpretability with Integrated Gradients
Integrated Gradients Integrated Gradients is a technique for attributing a classification model’s prediction to its input features. It is a model interpretability technique: you can use it to visualize the relationship between input features and model predictions. Integrated Gradients is a variation on computing the gradient of the prediction output with regard to features of the…
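That variation amounts to averaging the gradient along a straight path from a baseline input to the actual input, then scaling by the input difference. A minimal NumPy sketch of the Riemann-sum approximation, using a hypothetical toy model in place of a real classifier:

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=64):
    """Riemann-sum (midpoint) approximation of Integrated Gradients.

    grad_fn(z) returns the gradient of the model output w.r.t. input z.
    IG_i = (x_i - baseline_i) * mean over alpha in (0, 1) of
           dF/dx_i evaluated at baseline + alpha * (x - baseline).
    """
    alphas = (np.arange(steps) + 0.5) / steps     # midpoints of [0, 1]
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy model F(x) = sum(x**2); its gradient is 2*x.
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2 * x
x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_f, x, baseline)
print(attr, attr.sum())  # attributions; their sum ≈ F(x) - F(baseline)
```

The final check illustrates the completeness axiom that motivates the method: the attributions sum to the difference between the model's output at the input and at the baseline.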
-
Visualizing what convnets learn
Introduction In this example, we look into what sort of visual patterns image classification models learn. We’ll be using the ResNet50V2 model, trained on the ImageNet dataset. Our process is simple: we will create input images that maximize the activation of specific filters in a target layer (picked somewhere in the middle of the model: layer conv3_block4_out). Such…
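The gradient-ascent loop this describes can be sketched in NumPy with a toy linear "filter" standing in for a real conv filter; the actual example differentiates the activation through ResNet50V2, so everything below is an illustrative assumption, not the example's code:

```python
import numpy as np

def maximize_activation(filter_w, steps=50, lr=0.1, seed=0):
    """Gradient ascent on the input to maximize a toy linear filter.

    Here activation(img) = filter_w . img, so the gradient w.r.t. the
    image is simply filter_w; a real filter would need autodiff.
    """
    rng = np.random.default_rng(seed)
    img = rng.normal(scale=0.1, size=filter_w.shape)      # start from noise
    for _ in range(steps):
        grad = filter_w                                   # d(activation)/d(img)
        grad = grad / (np.sqrt((grad ** 2).mean()) + 1e-8)  # normalize the step
        img = img + lr * grad                             # ascend the activation
    return img

w = np.array([1.0, -1.0, 2.0])   # toy "filter" weights
img = maximize_activation(w)
print(float(w @ img))  # activation is far above its small random starting value
```

Starting from small random noise and normalizing the gradient each step mirrors the structure of the real procedure; the image that emerges is the pattern the filter responds to most strongly.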