L-CAM
Learning Visual Explanations for DCNN-Based Image Classifiers Using an Attention Mechanism
This AI Asset is associated with two learning-based explainable AI methods for deep convolutional neural network (DCNN) image classifiers, called L-CAM-Fm and L-CAM-Img. Both methods use an attention mechanism that is inserted into the original (frozen) DCNN and is trained to derive class activation maps (CAMs) from the last convolutional layer's feature maps. During training, the CAMs are applied either to the feature maps (L-CAM-Fm) or to the input image (L-CAM-Img), forcing the attention mechanism to learn the image regions that explain the DCNN's outcome.
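As an illustration of the general idea, the sketch below shows how an attention module could map a frozen backbone's last-layer feature maps to per-class activation maps, and how the resulting CAM could mask the input image in the L-CAM-Img style. This is a minimal, hypothetical PyTorch sketch; the module structure, names, and shapes are assumptions for illustration, not the authors' implementation (see the L-CAM repository for the actual code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionCAM(nn.Module):
    """Hypothetical attention module: a 1x1 convolution maps the frozen
    backbone's last-layer feature maps (C channels) to one activation
    map per class (K classes)."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.attn = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, feature_maps: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> (B, K, H, W) class activation maps
        return self.attn(feature_maps)


# Illustrative usage (all shapes/values are placeholders):
backbone_features = torch.randn(1, 512, 7, 7)   # last conv layer output
attn = AttentionCAM(in_channels=512, num_classes=10)
cams = attn(backbone_features)                   # (1, 10, 7, 7)

# L-CAM-Img style: select the CAM of a target class, squash it to [0, 1],
# upsample it to the input resolution, and mask the input image with it.
target_class = 3
cam = torch.sigmoid(cams[:, target_class:target_class + 1])  # (1, 1, 7, 7)
image = torch.randn(1, 3, 224, 224)
mask = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                     align_corners=False)
masked_image = image * mask  # would be fed back to the frozen DCNN
```

During training, only the attention module's parameters would be updated; the masked image (or masked feature maps, for L-CAM-Fm) is passed through the frozen classifier so that the loss rewards masks that preserve the class prediction.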
Explainable Image Classification with Attention-based Activation Maps
This repository hosts the code and data for our paper: I. Gkartzonika, N. Gkalelis, V. Mezaris, "Learning Visual Explanations for DCNN-Based Image Classifiers Using an Attention Mechanism", Proc. ECCV 2022 Workshop on Vision with Biased or Scarce Data (VBSD), Springer LNCS vol. 13808, pp. 396-411, Oct. 2022.
For more information on topics such as:
- Code requirements
- Visual examples and comparison of results
- Data preparation instructions
- Training and evaluation details
- Input parameters
- License
- Citation
please refer to the L-CAM repository on GitHub.
Acknowledgements
This work was supported by the EU Horizon 2020 programme under grant agreements H2020-101021866 (CRiTERIA) and H2020-951911 (AI4Media).