Local Interpretation of Random Forests
@ Intelligent Systems Lab - Ioannis Mollas
GNU General Public License (GPL) v3
LionForests is a methodology for shedding light on the predictions of tree ensemble algorithms, a frequently misjudged family of models, thus addressing the drawbacks of their black-box, uninterpretable nature.
Research areas: Explainable AI
Technical Categories: Machine learning
Business Categories: Earth Observation
27.05.2021 - 10:04
Additional information: Explainability is a key requirement for human-centred AI approaches. By using classic unsupervised learning techniques and an enhanced similarity metric to wander among the transparent trees inside a forest, following breadcrumbs, the interpretable essence of tree ensembles emerges. An interpretation provided by such systems through our approach, which we call “LionForests”, can be a simple, comprehensive rule.
The interpretation technique implemented in LionForests is intended to provide explanations for Random Forest classifiers, working towards a trustworthy machine learning component. LionForests actively takes into consideration the internal structure of the model it explains.
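To give a concrete picture, the sketch below illustrates the general idea on a scikit-learn RandomForestClassifier: the decision paths of the trees that agree with the ensemble's prediction are intersected into a single conjunctive rule. This is a simplified illustration only, not the LionForests implementation or API; LionForests additionally reduces the number of trees considered through clustering and an enhanced similarity metric, and the dataset and aggregation used here are assumptions chosen for brevity.

```python
# Minimal sketch (not the LionForests API): intersect the decision paths of the
# trees that agree with the forest's prediction into one conjunctive rule.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
X, y, feature_names = data.data, data.target, data.feature_names

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

instance = X[:1]                          # the instance to be explained
prediction = rf.predict(instance)[0]      # ensemble-level prediction

# One (lower, upper) range per feature, tightened by every split encountered
# on the decision paths of the trees that agree with the ensemble.
ranges = {name: [-np.inf, np.inf] for name in feature_names}

for tree in rf.estimators_:
    # Sub-estimators predict class indices; map them back to the forest's labels.
    if rf.classes_[int(tree.predict(instance)[0])] != prediction:
        continue
    for node in tree.decision_path(instance).indices:
        feat = tree.tree_.feature[node]
        if feat < 0:                      # leaf node, no split condition
            continue
        threshold = tree.tree_.threshold[node]
        name = feature_names[feat]
        if instance[0, feat] <= threshold:   # went left: tighten the upper bound
            ranges[name][1] = min(ranges[name][1], threshold)
        else:                                # went right: tighten the lower bound
            ranges[name][0] = max(ranges[name][0], threshold)

# Render the intersected ranges as one human-readable rule.
conditions = []
for name, (lo, hi) in ranges.items():
    if np.isfinite(lo) and np.isfinite(hi):
        conditions.append(f"{lo:.2f} < {name} <= {hi:.2f}")
    elif np.isfinite(hi):
        conditions.append(f"{name} <= {hi:.2f}")
    elif np.isfinite(lo):
        conditions.append(f"{name} > {lo:.2f}")

print("if " + " and ".join(conditions) + f" then '{data.target_names[prediction]}'")
```

The printed rule (e.g. a conjunction of feature ranges followed by the predicted class) shows the kind of single, human-readable explanation the component aims to provide for an individual prediction.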
The LionForests component is GDPR compliant (Articles 13–15) because it provides a way to interpret the decisions of a random forest model in the form of rules. This approach addresses the "explicability" requirement of the GDPR, which sets requirements for automated decision processes that have an impact on humans.